MIT: Optimizing solar farms with ‘Smart Drones’


As drones increasingly take on the job of inspecting growing solar farms, Raptor Maps’ software makes sense of the data they collect. Image courtesy of Raptor Maps

MIT spinoff Raptor Maps uses machine-learning software to improve the maintenance of solar panels.

As the solar industry has grown, so have some of its inefficiencies. Smart entrepreneurs see those inefficiencies as business opportunities and try to create solutions around them. Such is the nature of a maturing industry.

One of the biggest complications emerging from the industry’s breakneck growth is the maintenance of solar farms. Historically, technicians have run electrical tests on random sections of solar cells in order to identify problems. In recent years, the use of drones equipped with thermal cameras has improved the speed of data collection, but now technicians are being asked to interpret a never-ending flow of unstructured data.

That’s where Raptor Maps comes in. The company’s software analyzes imagery from drones and diagnoses problems down to the level of individual cells. The system can also estimate the costs associated with each problem it finds, allowing technicians to prioritize their work and owners to decide what’s worth fixing.

“We can enable technicians to cover 10 times the territory and pinpoint the most optimal use of their skill set on any given day,” Raptor Maps co-founder and CEO Nikhil Vadhavkar says. “We came in and said, ‘If solar is going to become the number one source of energy in the world, this process needs to be standardized and scalable.’ That’s what it takes, and our customers appreciate that approach.”

Raptor Maps processed data covering 1 percent of the world’s solar energy in 2018, amounting to the energy generated by millions of panels around the world. And as the industry continues its upward trajectory, with solar farms expanding in size and complexity, the company’s business proposition only becomes more attractive to the people driving that growth.

Picking a path

Raptor Maps was founded by Eddie Obropta ’13 SM ’15, Forrest Meyen SM ’13 PhD ’17, and Vadhavkar, who was a PhD candidate at MIT between 2011 and 2016. The former classmates had worked together in the Human Systems Laboratory of the Department of Aeronautics and Astronautics when Vadhavkar came to them with the idea of starting a drone company in 2015.

The founders were initially focused on the agriculture industry. The plan was to build drones equipped with high-definition thermal cameras to gather data, then create a machine-learning system to gain insights on crops as they grew. While the founders began the arduous process of collecting training data, they received guidance from MIT’s Venture Mentoring Service and the Martin Trust Center. In the spring of 2015, Raptor Maps won the MIT $100K Launch competition.

But even as the company began working with the owners of two large farms, Obropta and Vadhavkar were unsure of their path to scaling the company. (Meyen left the company in 2016.) Then, in 2017, they made their software publicly available and something interesting happened.

They found that most of the people who used the system were applying it to thermal images of solar farms instead of real farms. It was a message the founders took to heart.

“Solar is similar to farming: It’s out in the open, 2-D, and it’s spread over a large area,” Obropta says. “And when you see [an anomaly] in thermal images on solar, it usually means an electrical issue or a mechanical issue — you don’t have to guess as much as in agriculture. So we decided the best use case was solar. And with a big push for clean energy and renewables, that aligned really well with what we wanted to do as a team.”

Obropta and Vadhavkar also found themselves on the right side of several long-term trends as a result of the pivot. The International Energy Agency has proposed that solar power could be the world’s largest source of electricity by 2050. But as demand grows, investors, owners, and operators of solar farms are dealing with an increasingly acute shortage of technicians to keep the panels running near peak efficiency.

Since deciding to focus on solar exclusively around the beginning of 2018, Raptor Maps has found success in the industry by releasing its standards for data collection and letting customers — or the many drone operators the company partners with — use off-the-shelf hardware to gather the data themselves. After the data is submitted to the company, the system creates a detailed map of each solar farm and pinpoints any problems it finds.

“We run analytics so we can tell you, ‘This is how many solar panels have this type of issue; this is how much the power is being affected,’” Vadhavkar says. “And we can put an estimate on how many dollars each issue costs.”
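Raptor Maps has not published the internals of its cost model, but the kind of per-issue dollar estimate Vadhavkar describes can be sketched as power lost per anomaly, times hours of lost production, times the energy price. A minimal illustration in Python (the anomaly classes, wattage figures, and rates below are hypothetical, not Raptor Maps’ actual numbers):

```python
# Hypothetical sketch of per-anomaly cost estimation. The anomaly
# classes, power-loss figures, energy price, and capacity factor are
# illustrative assumptions, not Raptor Maps' actual model.

ANOMALY_POWER_LOSS_W = {
    "cracked_cell": 5,       # watts lost per affected panel
    "diode_failure": 110,    # one bypassed cell substring
    "string_outage": 8000,   # an entire string offline
}

def annual_cost_usd(anomaly: str, count: int, price_per_kwh: float = 0.05,
                    capacity_factor: float = 0.25) -> float:
    """Estimated annual revenue lost to `count` anomalies of one type."""
    watts = ANOMALY_POWER_LOSS_W[anomaly] * count
    kwh_per_year = watts / 1000 * 8760 * capacity_factor
    return kwh_per_year * price_per_kwh

def prioritize(findings: dict) -> list:
    """Sort anomaly types by estimated annual cost, most expensive first."""
    return sorted(findings, key=lambda a: annual_cost_usd(a, findings[a]),
                  reverse=True)

findings = {"cracked_cell": 400, "diode_failure": 25, "string_outage": 1}
print(prioritize(findings))
# → ['string_outage', 'diode_failure', 'cracked_cell']
```

Ranking by dollars rather than by anomaly count is the point: one string outage here outweighs four hundred cracked cells.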

The model allows Raptor Maps to stay lean while its software does the heavy lifting. In fact, the company’s current operations involve more servers than people.

The tiny operation belies a company that’s carved out a formidable space for itself in the solar industry. Last year, Raptor Maps processed four gigawatts’ worth of data from customers on six continents. That’s enough capacity to power nearly 3 million homes.
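The homes figure is easy to sanity-check. Assuming an average U.S. household draws roughly 1.2 kW of continuous power (about 10,700 kWh per year, an external figure not from the article), four gigawatts of panel capacity spread across three million homes works out to about 1.3 kW each:

```python
# Back-of-envelope check of the "4 GW ≈ 3 million homes" claim.
# The ~1.2 kW average-household figure is an external assumption.
capacity_w = 4e9          # 4 GW of inspected panel capacity
homes = 3e6               # claimed number of homes powered
per_home_w = capacity_w / homes
print(round(per_home_w))  # → 1333 (watts per home, near the ~1.2 kW average)
```

This treats the 4 GW as available output, ignoring the nameplate-versus-actual-generation distinction, so it is a rough plausibility check rather than a precise conversion.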

Vadhavkar says the company’s goal is to grow at least fivefold in 2019 as several large customers move to make the software a core part of their operations. The team is also working on getting its software to generate insights in real time using graphical processing units on the drone itself as part of a project with the multinational energy company Enel Green Power.

Ultimately, the data Raptor Maps collects is taking the uncertainty out of the solar industry, making it a more attractive space for investors, operators, and everyone in between.

“The growth of the industry is what drives us,” Vadhavkar says. “We’re directly seeing that what we’re doing is impacting the ability of the industry to grow faster. That’s huge. Growing the industry — but also, from the entrepreneurial side, building a profitable business while doing it — that’s always been a huge dream.”


Next-Gen Lithium-Ion Batteries – Combining Graphene + Silicon Could it be the Key?



Researchers have long been investigating the use of silicon in lithium-ion batteries, as it has the potential to greatly increase storage capacity compared to graphite, the material used in most conventional lithium-ion batteries. By some estimates, silicon could boast a lithium storage capacity of 4,200 mAh/g—11 times that of graphite.
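The “11 times” multiple follows directly from the two theoretical gravimetric capacities. Graphite’s ~372 mAh/g (the theoretical value for fully lithiated LiC6) is a standard textbook figure, not stated in the article:

```python
# Ratio of theoretical lithium storage capacities, in mAh/g.
silicon_capacity = 4200   # silicon, from the article
graphite_capacity = 372   # graphite (LiC6), standard theoretical value
ratio = silicon_capacity / graphite_capacity
print(round(ratio, 1))    # → 11.3, matching the article's "11 times"
```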

However, despite its benefits, silicon comes with its own challenges.

“When you store a lot of lithium ions in your silicon, you actually physically expand the volume of the silicon to about 3 to 3.8 times its original volume—so that is a lot of expansion,” explained Bor Jang, PhD, in an exclusive interview with R&D Magazine. “That by itself is not a big problem, but when you discharge your battery—like when you use your smartphone—the silicon shrinks. Then when you recharge your battery, the silicon expands again. This repeated expansion and shrinkage leads to the breakdown of the particles inside of your battery, so it loses its capacity.”

Jang offers one solution—graphene, a single layer sheet of carbon atoms tightly bound in a hexagonal honeycomb lattice.

“We have found that graphene plays a critical role in protecting the silicon,” said Jang, the CEO and Chief Scientist of Global Graphene Group. The Ohio-based advanced materials organization has created GCA-II-N, a graphene and silicon composite anode for use in lithium-ion batteries.

The innovation—which was a 2018 R&D 100 Award winner—has the potential to make a significant impact in the energy storage space. Jang shared more about graphene, GCA-II-N, and its potential applications in his interview with R&D Magazine:

 

           Photo Credit: Global Graphene Group

 

R&D Magazine: Why is graphene such a good material for energy storage?

Jang: From the very beginning, when we invented graphene back in 2002, we realized that graphene has certain very unique properties. For example, it has very high electrical conductivity, very high thermal conductivity, and very high strength—in fact, it is probably the strongest material known to mankind. We thought that if we made use of graphene to produce the anode material, then we could significantly improve not only the strength of the electrode itself, but also dissipate heat faster, while reducing the chances of the battery catching fire or exploding.

Also, graphene is extremely thin—a single layer of graphene is 0.34 nanometers (nm) thick. You can imagine that if you had a fabric as thin as 0.34 nanometers, then you could use this material to wrap around just about anything. So it is a very good protection material in that sense. That is another benefit of the flexibility of this graphene material.

 

 


Another interesting feature of graphene is that it has a very high specific surface area. For instance, if I give you 1.5 grams of single-layer graphene, it will be enough to cover an entire football stadium. There is a huge amount of surface area per unit weight with this material.

That translates into another interesting property in the energy storage area. In that field there is a device called a supercapacitor, or ultracapacitor. The operation of supercapacitors depends upon conducting surface areas, like graphene or activated carbon. These graphene sheets have, to be exact, 2,630 square meters of surface area per gram. That would give you, in principle, a very high capacity per gram of this material when you use it as an electrode material for supercapacitors. There are so many properties associated with graphene for energy applications—those are just examples. I could talk about this all day!
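Jang’s 2,630 m²/g figure can be reproduced from geometry alone: a free-standing sheet exposes two faces, so its specific surface area is 2/(ρ·t). Using bulk graphite’s density (~2.27 g/cm³, an external value) and an interlayer thickness of 0.335 nm (the article rounds this to 0.34 nm):

```python
# Specific surface area of a free-standing graphene sheet.
# Both faces count, so SSA = 2 / (density * thickness).
density = 2.27e6      # g/m^3  (2.27 g/cm^3, bulk graphite; external value)
thickness = 0.335e-9  # m, graphite interlayer spacing (article rounds to 0.34 nm)
ssa = 2 / (density * thickness)  # m^2 per gram
print(round(ssa))     # → 2630, matching the quoted 2,630 m²/g
```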

 

 

R&D Magazine: Where is the team currently with the GCA-II-N and what are the next steps for this project?

Jang: Last year we began to sell the product. We have a small-scale manufacturing facility in Dayton, OH, where we are situated at the moment. It now has about a 50-metric-ton capacity, and we can easily scale it up. We have been producing mass quantities of this material and delivering it to potential customers for validation. We are basically in the customer validation stage for this business right now.

We will continue to do research and development for this project. We will eventually manufacture the batteries here in the U.S., but at the moment we are doing the anode materials only.

R&D Magazine: What types of customers are showing interest in this technology?

Jang: Electric vehicles are a big area that is growing rapidly, particularly in parts of Asia such as China. The electric vehicle industry is in the driver’s seat and is driving the growth of this business worldwide right now. E-bikes and electric scooters are another rapidly growing business where this could be used.

Another example is your smartphone. Right now, if you use your phone continuously, the battery may last for half a day, or maybe a whole day if you push it. This technology has the ability to double the amount of energy that can be stored in your battery. Electronic devices are another big area of application for this technology.

A third area is the energy storage business; the technology could be utilized to store solar or wind energy after it has been captured. Lithium-ion batteries are gaining a lot of ground in this market right now.

Another rapidly growing area right now is drones. Drones are used not only for fun, but for agricultural purposes or for surveillance purposes, such as during natural disasters. Drones are seeing a lot of applications right now, and batteries are a very important part of that.

R&D Magazine: Are there any challenges to working with graphene?

Jang: One of the major challenges is that graphene by itself still has a relatively high cost. We are running second-generation processes right now, and I think in a couple of years we should be able to significantly reduce the cost of graphene. We are also working on a third generation of processes that would allow us to reduce the cost even further. Cost is a major obstacle to large-scale commercialization of all graphene applications.

The second challenge is that, because graphene is a so-called ‘nanomaterial’ in thickness, a lot of customers find it difficult to disperse in water, organic solvents, or plastics in order to combine graphene with other types of materials and make a composite out of it. Therefore people are resistant to using it. We have found a way to overcome this challenge, whether real or perceived: we can do the dispersion for a customer and then ship the result directly to them.

There is also an education challenge. It is sometimes difficult to convince engineers; they want to stick with the materials they are more familiar with, even though the performance is better with graphene. That is a barrier as well. However, I do think graphene is becoming more well known.

Laura Panjwani
Editor-in-Chief, R&D Magazine

Super-stable antimony carbon composite anodes to boost potassium-ion battery storage performance



Potassium-ion batteries (PIBs) have been considered promising alternatives to lithium-ion batteries due to the rich natural abundance of potassium (K) and the similar redox potential of K+/K to that of Li+/Li.

However, due to the large K ion radius and slow reaction dynamics, the previously reported PIB anode materials (carbon-based materials, alloy-based anodes such as tin and antimony, metal oxides, etc.) suffer from a low capacity and fast capacity decay.
In order to achieve a high capacity and excellent cycle stability for K storage process, rational design of the electrode materials and proper selection of the electrolytes should be considered simultaneously.
Recently, two research teams led by Prof. Chunsheng Wang and Prof. Michael R. Zachariah from the University of Maryland, College Park, have designed and fabricated a novel antimony (Sb) carbon composite PIB anode via a facile and scalable electrospray-assisted strategy and found that this anode delivered super high specific capacities as well as cycling stability in a highly concentrated electrolyte (4M KTFSI/EC+DEC).
This work has been published in Energy & Environmental Science (“Super Stable Antimony-carbon composite anodes for potassium-ion batteries”).

 

Figure 1. Schematic illustration of the electrospray-assisted strategy for fabricating antimony@carbon sphere network electrode materials. (© Royal Society of Chemistry)
We have successfully fabricated a novel antimony carbon composite with small Sb nanoparticles uniformly confined in the carbon sphere network (Sb@CSN) via a facile and scalable electrospray-assisted strategy.
Such a unique nanostructure can effectively mitigate the deleterious mechanical damage from large volume changes and provide a highly conductive framework for fast electron transport during the alloying/de-alloying cycling process.
Alongside the novel structural design of the anode material, formation of a robust solid-electrolyte-interphase (SEI) on the anode is crucially important to achieve its long-term cycling stability.
The formation of a robust SEI on the anode material is determined by both the surface chemistries of active electrode materials as well as electrolyte compositions such as salt anion types and concentrations.
Therefore, designing a proper electrolyte is extremely important for the anode to achieve a high cycling stability.
In our study, we have for the first time developed a stable and safe electrolyte of highly concentrated 4M KTFSI/EC+DEC for PIBs to promote the formation of a stable and robust KF-rich SEI layer on an Sb@CSN anode, which guarantees stable electrochemical alloy/de-alloy reaction dynamics during long-time cycling process.
Figure 2. Cycling performance of antimony carbon sphere network electrode materials at 200mA/g current density in the highly concentrated electrolyte (4M KTFSI/EC+DEC). (© Royal Society of Chemistry)
In the optimized 4M KTFSI/EC+DEC electrolyte, the Sb@CSN composite delivers an excellent reversible capacity of 551 mAh/g at 100 mA/g over 100 cycles, with a capacity decay of 0.06% per cycle from the 10th to the 100th cycle, and 504 mAh/g even at 200 mA/g after 220 cycles. This represents the best electrochemical performance, with the highest capacity and longest cycle life, of any K-ion battery anode reported to date.
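The quoted fade rate translates directly into a retention figure: 0.06 percent per cycle over the 90 cycles from the 10th to the 100th leaves roughly 94.6 percent of capacity, treating the fade as linear:

```python
# Capacity retention implied by 0.06% fade per cycle between the
# 10th and 100th cycles (90 cycles), treating fade as linear.
fade_per_cycle = 0.0006
cycles = 100 - 10
retention = 1 - fade_per_cycle * cycles
print(f"{retention:.1%}")  # → 94.6%
```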
The electrochemical reaction mechanism was further revealed by density functional theory (DFT) calculations, which support these excellent potassium-storage properties.
Figure 3. Capacity comparison of the Sb@CSN anode with previously reported anodes in potassium-ion batteries. (© Royal Society of Chemistry)
In conclusion, these outstanding performances can be attributed to the novel nanostructure of Sb nanoparticles uniformly encapsulated in the conductive carbon network, and to the formation of a more stable and robust KF-rich SEI layer on Sb@CSN in the optimized 4M KTFSI electrolyte.
These encouraging results will significantly deepen understanding of the fundamental electrochemistry of potassium-ion batteries, as well as support the rational development of efficient electrolyte systems for next-generation high-performance potassium-ion batteries.
Yong Yang, Research Associate, Prof. Zachariah Research Group, Department of Chemical and Environmental Engineering, University of California, Riverside

Atoms to Nanoparticles to Living Matter – The evolutionary path of the Synthetic Nanoparticle World is Just Beginning



Atoms form molecules or crystals using their outermost electrons through interactions between single elements or multiple elements. Various molecules and crystals can assemble in different ways to form organic structures, such as DNA and cell-membrane phospholipids, or various tissues. These tissues are the bases of organs in a living organism.

Whereas the evolutionary path of the atomic world has occurred over billions of years, the evolutionary path of the synthetic nanoparticle world has just begun. Man-made nanoparticle assemblies are beginning to revolutionize different fields including thermoelectronics, photoelectronics, catalysts, energy generation and storage, as well as medical diagnostics and therapeutics.
A review article in Advanced Functional Materials (“From Atoms to Lives: The Evolution of Nanoparticle Assemblies”) uses the unconventional approach of comparing the atomic and nanoparticle worlds to describe the development of nanoparticle research during the past several decades and to provide pointers as to what the future might hold.
The authors discuss in great detail corresponding classifications, such as atoms and nanoparticles; molecules and molecular type nanoparticle assemblies; crystals and crystalline-type nanoparticle assemblies; biological organisms and analogous artificial assemblies; and biologically functional nanoparticle complex assemblies.

 

The evolutionary path of the atomic world and architectural evolution of nanoparticle world. (Reprinted with permission by Wiley-VCH Verlag) (click on image to enlarge)

 

Similarities and differences between atoms and nanoparticles

The configuration of nanoparticles is similar to that of atoms. An atom has a nucleus and surrounding electron clouds, while a nanoparticle consists of a core element and coating ligands. The electrons balance the positive charge of nuclei and are responsible for interactions with other atoms. The ligands of nanoparticles maintain the stability of the core materials, render them soluble in certain solvents, and enable their interactions with target objects.
Atoms can gain or lose electrons to become ions. Modifying the coating ligands of nanoparticles can change the surfaces of the nanoparticles to become positively or negatively charged or even enable the formation of crystalline or defective structures.
Atoms can be categorized into different groups with specific properties, such as metals, semiconductors, and inert elements, depending on their numbers of protons and electrons. Nanoparticles can similarly be categorized into metallic, semiconductor, and insulator types according to the band theory of the nanoparticle cores.
However, nanoparticles are unlike atoms in that their physical and chemical properties, such as size, surface composition, surface charge, flexibility, and shape, govern their functionality in various applications. Generally, the advantage of using nanoparticles as building blocks instead of atoms is their complex and tunable configurations and functions.
Usually, the atomic nucleus is considered a hard core, while the boundary of the surrounding electron cloud is flexible enough to engender polarization or deformation in ionic and covalent bonding states. In contrast, nanoparticles show greater flexibility and coordinability, manifested in their independent existence in various states, controllable deformation, flowability, and designability.

 

a) Similarities and b) differences between atoms and nanoparticles. (Reprinted with permission by Wiley-VCH Verlag) (click on image to enlarge)

 

The essential characteristic of nanoparticles is an adjustable size of less than 100 nm in at least one direction. Atomic size, by contrast, depends on the state and context of the boundary of the surrounding electron cloud, and varies in a predictable and regular manner across the periodic table. In terms of surface composition, the periphery of an atomic nucleus has only electrons.
However, the surface of nanoparticles can accept coatings of ligands or shells to form interfacial layers including metal/inorganic shells, polymetallic salts, organic chains, functionalized derivatives, polymers, cell membranes, and even biofriendly DNA sequences.
Geometrically, atoms can be modeled as spherical entities, while nanoparticles can have various shapes and structures, both spherical and nonspherical. Nonspherical nanoparticles (cubes, plates, rods) play a key role in the nanoparticle field because they can lead to specific superstructures and functions.
Core–shell nanoparticles are another important group of nanoparticles that differs from atoms. A wide range of different materials and combinations can be used as their constituent components, and the choice of the core or shell materials is strongly dependent on the end use.

Molecules and molecular-type assemblies

The world is not built from individual atoms or nanoparticles – it is their assemblies that form our world and the novel nanoworld.
Atoms bond in different ways to form molecules where they share or exchange electron density. Atoms can form small inert molecules and optically, thermally, or functionally responsive molecules and macromolecules as well as natural biopolymers such as proteins that are fundamental to biological structure and function.
Extending the comparison of atoms to nanoparticles, the authors suggest that nanoparticles with non-oriented interparticle interactions can be regarded as molecular-type assemblies. This definition provides a simple way to broadly classify nanoparticles forming molecular-type assemblies.
Here, the formed molecular-type nanoparticle assemblies are actually irregular assemblies of nanoparticles. The random aggregation of nanoparticles occurs at arbitrary activated sites, similar to the construction of amorphous atomic materials, such as amorphous metals and glasses, with disordered atomic-scale structure.
a) Different types of molecules. b) Corresponding molecular-type assemblies. (Reprinted with permission by Wiley-VCH Verlag) (click on image to enlarge)

 

Crystals and crystalline-type assemblies

Compared with molecules, crystals have long-range periodic and orientational symmetry. Generally, the symmetry in crystals results from periodic and orientational arrangements of monoatomic or polyatomic interactions via ionic, covalent, or metallic bonds.
Nanoparticle superlattices have similar characteristics in terms of their structure and interparticle interactions. Nanoparticles with oriented interparticle interactions can be classified as crystalline-type assemblies. Under this framework, single/binary/multicomponent nanoparticle superlattices correspond to monoatomic/diatomic/polyatomic crystals.
Compared to the interaction bonds between atoms in crystals, the interparticle interaction forces of crystalline-type nanoparticle assemblies tend to bond weakly through noncovalent interaction, such as van der Waals interactions, electrostatic interactions, and hydrogen bonds.
A key feature of this type of nanoparticle assembly is that the interparticle interactions are isotropic and cannot be individually modified. As demonstrated in crystalline-type nanoparticle assemblies, interparticle interactions can now be tuned to provide orientational and translational properties.

Biological organisms and analogous artificial assemblies

Prebiotic structures of amines, amino acids, thiols, and other molecules in Earth’s primordial era facilitated the formation of biological assemblies such as phospholipids and DNA. These molecules combined chemically to form the essential parts or basic structures of organelles and tissues. There is some indication that such biological assemblies played an important role under primordial Earth conditions in enabling living organisms to develop.
The authors argue that biological-like artificial assemblies such as DNA, the phospholipid layers of cell membranes, and the hierarchical structures of living organisms can be associated with the functional properties of nanoparticles present in a wide variety of catalytic, optical, and biomedical processes.
Different types of biological tissue structures and analogous nanoparticle assemblies. a) The chirality screw structures of DNA. b) The phospholipid layer structures of the cell membrane. c) The hierarchical structures of bone tissues. (Reprinted with permission by Wiley-VCH Verlag) (click on image to enlarge)
The fusion of inorganic nanoparticles with living matter can provide new systems that combine the advantages of both worlds. For instance, researchers have fabricated self-photosensitizing nonphotosynthetic bacteria by hybridizing inorganic semiconductors so that they could efficiently harvest sunlight with biocatalysts specializing in energy transformation (“Self-photosensitization of nonphotosynthetic bacteria for solar-to-chemical production”).
In this work, the researchers grew the nonphotosynthetic CO2-reducing bacterium Moorella thermoacetica and bioprecipitated CdS nanoparticles onto it, converting CO2 into acetic acid via photosynthesis. This Moorella thermoacetica–CdS system had better performance and greater potential applications due to its ∼90% photosynthetic transfer rate, lack of energy loss during dark periods, and tunable light flux.
Concluding their review, the authors point out that current nanoparticle assemblies are static, i.e., they cannot self-respond/adapt to changing environments.
“Beyond dynamic nanoparticle assemblies, an ambitious target for nanoparticle assembly research is to produce, or create, living nanoparticle assemblies. Certain experiments have already been performed. Nanoparticle-assisted biosystems, bio-assisted nanoparticle systems, and bio/nanoparticle co-assistance systems are gradually becoming more prominent, suggesting a totally different future for nanoparticle assemblies.”
Three categories of functional assemblies, which are also the basic features of living organisms – motion, self-repair, and self-regulation/adaptation – will drive the future of nanoparticle assemblies and open the door to an era where DNA-based living nanoscale materials are ‘genetically’ modifiable and can undergo structural ‘evolution’.
Copyright © Nanowerk

Graphene research Breakthrough – Graphene produced with unique edge pattern – Crucial step toward using Graphene for ‘Nanoelectronic components’


Different patterns are formed at the edges of nanographene. Zigzags are particularly interesting — and particularly unstable. FAU researchers have now succeeded in creating stable layers of carbon with this pattern on their edges. Credit: FAU/Konstantin Amsharov

Bay, fjord, cove, armchair and zigzag—chemists use terms such as these to describe the shapes taken by the edges of nanographene.

Graphene consists of a single-layered carbon structure in which each carbon atom is surrounded by three others. This creates a pattern reminiscent of a honeycomb, with atoms in each of the corners. Nanographene is a promising candidate to bring microelectronics down to the nano-scale and a likely substitute for silicon.

The electronic properties of the material depend greatly on its shape, size and, above all, its periphery—in other words, how the edges are structured. A zigzag periphery is particularly suitable—in this configuration, the electrons, which act as charge carriers, are more mobile than in other edge structures. This means that using pieces of zigzag-shaped graphene in nanoelectronics components may allow switches to run at higher frequencies.

Materials scientists who want to research only zigzag nanographene confront the problem that this form makes the compounds unstable and difficult to produce in a controlled manner. Controlled production is a prerequisite, however, if the electronic properties are to be investigated in detail.

Researchers led by Dr. Konstantin Amsharov from the Chair of Organic Chemistry II have now succeeded in doing just that. Their research has now been published in Nature Communications. Not only have they discovered a straightforward method for synthesising zigzag nanographene, their procedure delivers a yield of close to 100 percent and is suitable for large scale production. They have already produced a technically relevant quantity in the laboratory.

Researchers wild about zigzags
The much sought-after zigzag pattern can be found either in staggered rows of honeycombs (blue and purple) or four-limbed stars surrounding a central point of four graphene honeycombs (red and green). Credit: FAU/Konstantin Amsharov

The researchers first produced preliminary molecules, which they then fit together in a honeycomb formation over several cycles in a process known as cyclisation. In the end, graphene fragments are produced from staggered rows of honeycombs or four-limbed stars surrounding a central point of four graphene honeycombs, with the sought-after zigzag pattern at the edges. The product crystallises directly, even during synthesis. In their crystalline form, the molecules are not in contact with oxygen. In solution, however, oxidation causes the structures to disintegrate quickly.

This approach allows scientists to produce large pieces of graphene, while maintaining control over their shape and periphery. This breakthrough in graphene research means that scientists should soon be able to produce and study a variety of interesting zigzag nanographene structures, a crucial step toward using the material in nanoelectronic components.


More information: Dominik Lungerich et al, Dehydrative π-extension to nanographenes with zig-zag edges, Nature Communications (2018). DOI: 10.1038/s41467-018-07095-z

 

University of Manchester – Nobel Prize-winning Chemistry Technique Used in a Clean Energy Breakthrough to Reduce the Cost of Fuel Cells in Electric Vehicles and Cut Harmful Emissions from Internal Combustion Engines


Credit: CC0 Public Domain

Scientists have used a Nobel-prize winning chemistry technique on a mixture of metals to potentially reduce the cost of fuel cells used in electric cars and reduce harmful emissions from conventional vehicles.

The researchers have translated a biological imaging technique, which won the 2017 Nobel Prize in Chemistry, to reveal atomic-scale chemistry in metal nanoparticles. These materials are among the most effective catalysts for energy-converting systems such as fuel cells. It is the first time this technique has been used for this kind of research.

The particles have a complex star-shaped geometry, and this new work shows that the edges and corners can have different chemistries, which can now be tuned to reduce the cost of batteries and catalytic converters.

The 2017 Nobel Prize in Chemistry was awarded to Joachim Frank, Richard Henderson and Jacques Dubochet for their role in pioneering the technique of single particle reconstruction. This electron microscopy technique has revealed the structures of a huge number of viruses and proteins but is not usually used for metals.

Now, a team at the University of Manchester, in collaboration with researchers at the University of Oxford and Macquarie University, have built upon the Nobel Prize winning technique to produce three dimensional elemental maps of metallic nanoparticles consisting of just a few thousand atoms.

Published in the journal Nano Letters, their research demonstrates that it is possible to map different elements at the nanometre scale in three dimensions, circumventing damage to the particles being studied.

Metal nanoparticles are the primary component in many catalysts, such as those used to convert toxic gases in car exhausts. Their effectiveness is highly dependent on their structure and chemistry, but because of their incredibly small size, electron microscopes are required to image them. However, most such imaging is limited to 2-D projections.

“We have been investigating the use of tomography in the electron microscope to map elemental distributions in three dimensions for some time,” said Professor Sarah Haigh, from the School of Materials, University of Manchester. “We usually rotate the particle and take images from all directions, like a CT scan in a hospital, but these particles were being damaged too quickly to enable a 3-D image to be built up. Biologists use a different approach for 3-D imaging, and we decided to explore whether this could be used together with spectroscopic techniques to map the different elements inside the nanoparticles.”

“Like ‘single particle reconstruction,’ the technique works by imaging many particles and assuming that they are all identical in structure, but arranged at different orientations relative to the electron beam. The images are then fed into a computer algorithm which outputs a three-dimensional reconstruction.”
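The averaging at the heart of single particle reconstruction can be illustrated with a toy calculation (ours, not from the paper; the orientation-alignment and 3-D reconstruction steps are omitted): each low-dose image of a particle is too noisy to be useful on its own, but averaging many images of structurally identical particles recovers the shared signal without exposing any one particle to a damaging dose.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "true" particle profile every image shares (a 1-D Gaussian stands in
# for the projection of a protein or nanoparticle).
signal = np.exp(-0.5 * ((np.arange(100) - 50) / 5.0) ** 2)

def noisy_image(noise_std=1.0):
    """One low-dose exposure: the shared signal buried in noise."""
    return signal + rng.normal(0.0, noise_std, signal.shape)

one = noisy_image()  # a single image is noise-dominated
avg = np.mean([noisy_image() for _ in range(2000)], axis=0)  # 2000 "particles" averaged

err_one = np.abs(one - signal).mean()
err_avg = np.abs(avg - signal).mean()
print(f"single-image error: {err_one:.3f}, averaged error: {err_avg:.3f}")
```

Averaging N images reduces the noise by roughly a factor of √N, which is why imaging thousands of identical particles beats taking one long (and damaging) exposure of a single particle.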

In the present study the new 3-D chemical imaging method has been used to investigate platinum-nickel (Pt-Ni) metal nanoparticles.

Lead author, Yi-Chi Wang, also from the School of Materials, added: “Platinum based nanoparticles are one of the most effective and widely used catalytic materials in applications such as fuel cells and batteries. Our new insights about the 3-D local chemical distribution could help researchers to design better catalysts that are low-cost and high-efficiency.”

“We are aiming to automate our 3-D chemical reconstruction workflow in the future,” added author Dr. Thomas Slater. “We hope it can provide a fast and reliable method of imaging nanoparticle populations, which is urgently needed to speed up optimisation of nanoparticle synthesis for wide-ranging applications including biomedical sensing, light-emitting diodes, and solar cells.”


More information: Yi-Chi Wang et al. Imaging Three-Dimensional Elemental Inhomogeneity in Pt–Ni Nanoparticles Using Spectroscopic Single Particle Reconstruction, Nano Letters (2019). DOI: 10.1021/acs.nanolett.8b03768

 

Harnessing Renewable Energy – Dongguk University Develops a Powerful New Catalyst Process for Electrolysis – Converting Renewable Energy Sources into Chemical Energy


An international collaboration of scientists at Dongguk University has developed a novel nickel-based hydroxide compound that can be used as a powerful catalyst for the electrolysis of water.

Finding and improving renewable energy sources is becoming increasingly important. One strategy to generate energy is breaking water molecules (H2O) apart in an electrochemical reaction known as electrolysis.

This process allows us to convert energy from the sun or other renewable sources into chemical energy. However, electrochemically splitting water molecules requires an overpotential—an excess voltage that has to be applied in addition to the theoretical voltage (1.23V vs. reversible hydrogen electrode or RHE) so that the necessary reactions can occur.
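The voltage budget described above can be sketched in a few lines (the overpotential values below are hypothetical examples, not figures from the Dongguk study): the voltage a practical cell must supply is the 1.23 V thermodynamic minimum plus the overpotential at each electrode, plus any ohmic loss.

```python
# Illustrative overpotential arithmetic; all numbers are hypothetical
# examples, not values from the study.
E_THERMO = 1.23  # V vs. RHE, thermodynamic minimum for splitting water

def cell_voltage(eta_anode, eta_cathode, current=0.0, resistance=0.0):
    """Required voltage = thermodynamic minimum + electrode overpotentials + ohmic (iR) loss."""
    return E_THERMO + eta_anode + eta_cathode + current * resistance

# A better catalyst lowers the overpotentials and hence the energy cost:
print(f"{cell_voltage(eta_anode=0.45, eta_cathode=0.10):.2f} V")  # poorer catalyst: 1.78 V
print(f"{cell_voltage(eta_anode=0.30, eta_cathode=0.05):.2f} V")  # better catalyst: 1.58 V
```

The whole point of an electrocatalyst is to shrink those η terms, so every millivolt shaved off the overpotential is energy saved on every mole of hydrogen produced.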

Electrocatalysts are materials that, because of their electrical and morphological features, facilitate electrochemical processes. Researchers have been searching for electrocatalysts that can aid in the electrolysis of water, and some of the best catalysts are noble-metal oxides, which are rare and costly. Nickel-based hydroxide (Ni(OH)2) compounds are, fortunately, a better alternative.

A team of scientists, including Profs. Hyunsik Im and Hyungsang Kim from Dongguk University, intercalated polyoxovanadate (POV) nanoclusters into Ni(OH)2 arranged in ordered layers and found that doing so improves its conducting and morphological properties, which in turn enhances its catalytic activity.


 File Photo: Dongguk University

They employed a promising method called chemical solution growth (CSG), wherein a highly saturated solution is prepared, and the desired material structure naturally forms as the solutes precipitate in a predictable and controlled fashion, creating a layer-by-layer structure with POV nanoclusters intercalated between the Ni(OH)2 layers.

The team demonstrated that the resulting house-of-cards-like structure greatly reduced the overpotential needed for the electrolysis of water. They attributed this to the morphological features of this material; the POV nanoclusters increase the spacing between the Ni(OH)2 layers and induce the formation of micropores, which increases the surface area of the final material and the number of catalytic sites where water can be split. “Our results demonstrate the advantages of the CSG method for optimizing the pore structure of the resulting material,” explains Prof. Im.

Facilitating the electrolysis of water using novel catalysts is a step toward achieving a greener future. What’s more, the CSG method could be useful in many other fields. “The facile CSG deposition of nanohybrid materials may be useful for applications such as the production of Li-ion batteries and biosensors,” states Prof. Kim. Only time will tell what new uses CSG will find.


More information: Jayavant L. Gunjakar et al, Two-Dimensional Layered Hydroxide Nanoporous Nanohybrids Pillared with Zero-Dimensional Polyoxovanadate Nanoclusters for Enhanced Water Oxidation Catalysis, Small (2018). DOI: 10.1002/smll.201703481

Journal reference: Small

Provided by: Dongguk University

 

Vanderbilt U – New nanoparticle targets tumor-infiltrating immune cells – Then ‘flips the switch’ to tell them to ‘Start Fighting’


Immune cells (green and red) surround and prepare to destroy a cancer cell (blue, center) Credit: Alex Ritter, Jennifer Lippincott Schwartz and Gillian Griffiths, National Institutes of Health

 


Immunotherapy’s promise in the fight against cancer drew international attention after two scientists won a Nobel Prize this year for unleashing the ability of the immune system to eliminate tumor cells.

But their approach, which keeps cancer cells from shutting off the immune system’s powerful T-cells before they can fight tumors, is just one way to use the body’s natural defenses against deadly disease.

 

A team of Vanderbilt University bioengineers today announced a major breakthrough in another: penetrating tumor-infiltrating immune cells and flipping on a switch that tells them to start fighting. The team designed a nanoscale particle to do that and found early success using it on human melanoma tissue.

“Tumors are pretty conniving and have evolved many ways to evade detection from our immune system,” said John T. Wilson, assistant professor of chemical and biomolecular engineering and biomedical engineering. “Our goal is to rearm the immune system with the tools it needs to destroy cancer cells.

“Checkpoint blockade has been a major breakthrough, but despite the huge impact it continues to have, we also know that there are a lot of patients who don’t respond to these therapies. We’ve developed a nanoparticle to find tumors and deliver a specific type of molecule that’s produced naturally by our bodies to fight off cancer.”

That molecule is called cGAMP, and it’s the primary way to switch on what’s known as the stimulator of interferon genes (STING) pathway: a natural mechanism the body uses to mount an immune response that can fight viruses or bacteria or clear out malignant cells. Wilson said his team’s nanoparticle delivers cGAMP in a way that jump-starts the immune response inside the tumor, resulting in the generation of T-cells that can destroy the tumor from the inside and also improve responses to checkpoint blockade.

While the Vanderbilt team’s research focused on melanoma, their work also indicates that this could impact treatment of many cancers, Wilson said, including breast, kidney, head and neck, neuroblastoma, colorectal and lung cancer.

His findings appear today in a paper titled “Endosomolytic Polymersomes Increase the Activity of Cyclic Dinucleotide STING Agonists to Enhance Cancer Immunotherapy” in the journal Nature Nanotechnology.

Daniel Shae, a Ph.D. student on Wilson’s team and first author of the manuscript, said the process began with developing the right nanoparticle, built using “smart” polymers that respond to changes in pH that he engineered to enhance the potency of cGAMP. After 20 or so iterations, the team found one that could deliver cGAMP and activate STING efficiently in mouse immune cells, then mouse tumors and eventually human tissue samples.

“That’s really exciting because it demonstrates that, one day, this technology may have success in patients,” Shae said.

Story Source:

Materials provided by Vanderbilt University. Original written by Heidi Nieland Hall. Note: Content may be edited for style and length.


Journal Reference:

  1. Daniel Shae, Kyle W. Becker, Plamen Christov, Dong Soo Yun, Abigail K. R. Lytton-Jean, Sema Sevimli, Manuel Ascano, Mark Kelley, Douglas B. Johnson, Justin M. Balko, John T. Wilson. Endosomolytic polymersomes increase the activity of cyclic dinucleotide STING agonists to enhance cancer immunotherapy. Nature Nanotechnology, 2019; DOI: 10.1038/s41565-018-0342-5

MIT News – Continuing Progress toward Practical Fusion Energy – “The MIT Fusion Landscape”



Dennis Whyte, director of the Plasma Science and Fusion Center. Images: Gretchen Ertl

In a series of talks, researchers describe a major effort to address climate change through carbon-free power.

A year after announcing a major public-private collaboration to design a fusion reactor capable of producing more power than it consumes, researchers from MIT and the startup company Commonwealth Fusion Systems on Tuesday presented the MIT community with an update on their progress. In a series of talks, they detailed the effort’s continuing work to bring about practical fusion power — based on the reaction that provides the sun’s energy — on a faster timescale than any previous efforts.

At the event, titled “The MIT Fusion Landscape,” speakers explained why fusion power is urgently needed, and described the approach MIT and CFS are taking and how the project is taking shape. According to Dennis Whyte, head of MIT’s Plasma Science and Fusion Center (PSFC), the new project’s aim is “to try to get to fusion energy a lot faster,” by creating a prototype fusion device with a net power output within the next 15 years. This timeframe is necessary to address “the greatest challenge we have now, which is climate change.”

“Humanity is standing on the edge of a precipice right now,” warned Kerry Emanuel, the Cecil and Ida Green Professor in Earth and Planetary Sciences, who studies the impacts climate change will have on the intensity and frequency of hurricanes and other storms. Because of the existential threat posed by climate change, it is crucial to develop every possible source of carbon-free energy, and fusion power has the potential to be a major part of the solution, he said.

Emanuel countered the claims by some skeptics who say that climate has always been changing, pointing out that human civilization has developed during the last several thousand years, which has been a period of exceptional climate stability. While global sea level rose by 400 feet at the end of the last ice age, he said, that was a time when humans were essentially nomads. “A 1-meter change today, in either direction, would be very problematic for humanity,” he said, adding that expected changes in rainfall patterns could have serious impacts on access to water and food.

Only three large countries have successfully shifted their economies away from fossil fuels, he said: Sweden, Belgium, and France. And all of those did so largely on the strength of hydropower and nuclear power — and did so in only about 15 years. “We’re going to have to do whatever works,” he said, and while conventional fission-based nuclear power may be essential in the near term, in the longer term fusion power could be key to weaning the world from fossil fuels.

Andrew Lo, the Charles E. and Susan T. Harris Professor at MIT’s Sloan School of Management, said that for large projects such as the development of practical fusion power plants, new kinds of funding mechanisms may be needed, as conventional venture capitalists and other traditional sources may not be sufficient to meet their costs. “We need to get the narrative right,” he said, to make it clear to people that investments will be needed to meet the challenge. “We need to make fusion real,” which means something on the order of a billion dollars of investment in various potential approaches, to maximize the odds of success, Lo said.

Katie Rae, executive director of The Engine, a program founded by MIT and designed to help spinoff companies bridge the gap between lab and commercial success, explained how that organization’s directors quickly came to unanimous agreement that the fusion project, aimed at developing a demonstration fusion device called SPARC, was worthy of the maximum investment to help bring about its transformative goals. The Engine aims to help projects whose development doesn’t fit into the 10-year expectation for a financial return that is typical of venture capital funds. Such projects require more long-range thinking — up to 18 years, in the case of the SPARC project. The goals of the project, she said, aligned perfectly with the reasons The Engine was created. “It is so central to why we exist,” she said.

Anne White, a nuclear physicist at the PSFC and the Cecil and Ida Green Associate Professor in Nuclear Engineering, explained why the SPARC concept is important for moving the field of fusion to a path that can lead directly to commercial power production. As soon as the team’s demonstration device proves that it is possible to produce more power than the device consumes — a milestone never yet achieved by a fusion device — “the narrative changes at that moment. We’ll know we are almost there,” she said.

But getting to that point has always been a daunting challenge. “It was a bit too expensive and the device was a bit too big” to move forward, until the last few years when advances in superconducting magnet technology made it possible to create more powerful magnets that could enable a smaller fusion power plant to deliver an amount of power that would have required a larger power plant with previous technology. That’s what made the new SPARC project possible, White explained.

Bob Mumgaard, CEO of the MIT spinoff company CFS, described the next steps the team is taking: designing and making the large superconducting magnet assemblies needed for a working fusion demonstration device. The company, which currently has 30 employees but is growing fast, is in the process of “building the strongest magnets we can build,” which in turn may find applications in other industries even as the group makes progress toward fusion power. He said within two years they should have full-scale magnets up and running.

CFS and the MIT effort are far from alone, though, Mumgaard said. There are about 20 companies actively involved in such fusion research. “This is a vibrant, evolving system,” he said. Rather than a static landscape, he said, “there’s a lot of interplay — it’s more of an ecosystem.” And MIT and CFS, with their innovative approach to designing a compact, lower-cost power plant architecture that can be built faster and more efficiently, “have changed the narrative already in that ecosystem, and that is a very exciting thing.”

 

Columbia University: Unlocking Graphene’s Superconducting Powers with a ‘Magic Angle’ (a twist) and Compression (a squeeze)


Pictured: Applying pressure to twisted bilayer graphene pushes the layer together, and transforms the material from a metal to a superconductor. Credit: Ella Maru Studio

A Columbia-led team has discovered a new method to manipulate the electrical conductivity of this game-changing material, the strongest known to man with applications ranging from nano-electronic devices to clean energy.

Graphene has been heralded as a wonder material. Not only is it the strongest, thinnest material ever discovered, its exceptional ability to conduct heat and electricity paves the way for innovation in areas ranging from electronics to energy to medicine.

Now, a Columbia University-led team has developed a new method to finely tune adjacent layers of graphene—lacy, honeycomb-like sheets of carbon atoms—to induce superconductivity. Their research provides new insights into the physics underlying this two-dimensional material’s intriguing characteristics.

The team’s paper is published in the Jan. 24 issue of Science.

“Our work demonstrates new ways to induce superconductivity in twisted bilayer graphene, in particular, achieved by applying pressure,” said Cory Dean, assistant professor of physics at Columbia and the study’s principal investigator. “It also provides critical first confirmation of last year’s MIT results—that bilayer graphene can exhibit superconductivity when twisted at a particular angle—and furthers our understanding of the system, which is extremely important for this new field of research.”

In March 2018 researchers at the Massachusetts Institute of Technology reported a groundbreaking discovery that two graphene layers can conduct electricity without resistance when the twist angle between them is 1.1 degrees, referred to as the “magic angle.”

But hitting that magic angle has proven difficult. “The layers must be twisted to within roughly a tenth of a degree around 1.1, which is experimentally challenging,” Dean said. “We found that very small errors in alignment could give entirely different results.”
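That sensitivity can be made concrete with the standard small-angle moiré formula, λ ≈ a / (2 sin(θ/2)), where a ≈ 0.246 nm is graphene’s lattice constant (this back-of-the-envelope calculation is ours, not from the paper):

```python
import math

A_GRAPHENE = 0.246  # nm, lattice constant of graphene

def moire_period_nm(theta_deg):
    """Moiré superlattice period of two graphene sheets twisted by theta degrees (small-angle formula)."""
    return A_GRAPHENE / (2.0 * math.sin(math.radians(theta_deg) / 2.0))

# A tenth of a degree shifts the superlattice period by roughly a nanometre:
for theta in (1.0, 1.1, 1.2):
    print(f"{theta:.1f} deg twist -> {moire_period_nm(theta):.1f} nm period")
```

Because the electronic behavior depends on this superlattice scale, a misalignment of a tenth of a degree changes the period by about ten percent, which helps explain why small alignment errors give entirely different results.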

So Dean and his colleagues, who include scientists from the National Institute for Materials Science and the University of California, Santa Barbara, set out to test whether magic-angle conditions could be achieved at bigger rotations.

“Rather than trying to precisely control the angle, we asked whether we could instead vary the spacing between the layers,” said Matthew Yankowitz, a postdoctoral research scientist in Columbia’s physics department and first author on the study. “In this way any twist angle could, in principle, be turned into a magic angle.”

They studied a sample with a twist angle of 1.3 degrees—only slightly larger than the magic angle, but still far enough away to preclude superconductivity.

Applying pressure transformed the material from a metal into either an insulator—in which electricity cannot flow—or a superconductor—where electrical current can pass without resistance—depending on the number of electrons in the material.

“Remarkably, by applying pressure of over 10,000 atmospheres we observe the emergence of the insulating and superconducting phases,” Dean said. “Additionally, the superconductivity develops at the highest temperature observed in graphene so far, just over 3 degrees above absolute zero.”

To reach the high pressures needed to induce superconductivity, the team worked closely with the National High Magnetic Field Laboratory, known as the MagLab, in Tallahassee, Florida.

“This effort was a huge technical challenge,” said Dean. “After fabricating one of the most unique devices we’ve ever worked with, we then had to combine cryogenic temperatures, high magnetic fields, and high pressure—all while measuring electrical response. Putting this all together was a daunting task, and our ability to make it work is really a tribute to the fantastic expertise at the MagLab.”

The researchers believe it may be possible to enhance the critical temperature of the superconductivity further at even higher pressures. The ultimate goal is to one day develop a superconductor that can operate at room temperature, and although that may prove challenging in graphene, this work could serve as a roadmap for achieving the goal in other materials.

Andrea Young, assistant professor of physics at UC Santa Barbara and a collaborator on the study, said the work clearly demonstrates that squeezing the layers has the same effect as twisting them, and offers an alternative paradigm for manipulating the electronic properties in twisted bilayer graphene.

“Our findings significantly relax the constraints that make it challenging to study the system, and give us new knobs to control it,” Young said.

Dean and Young are now twisting and squeezing a variety of atomically thin materials in the hopes of finding superconductivity emerging in other two-dimensional systems.

“Understanding ‘why’ any of this is happening is a formidable challenge, but critical for eventually harnessing the power of this material—and our work starts unraveling the mystery,” Dean said.


More information: “Tuning superconductivity in twisted bilayer graphene” Science (2019). science.sciencemag.org/lookup/ … 1126/science.aav1910