10 Predictions for the Solar and Storage Market in the 2020s


Branding and reputation will be increasingly important in the energy storage market.

All-in-one systems will be the new normal

1. Lots of storage

Batteries will be incentivized or mandated for practically every new solar PV system across the U.S. by 2025. As more homeowners and businesses deploy PV systems to reduce their electricity bills and ensure backup power, simple net metering will increasingly be replaced by time-of-use rates and other billing mechanisms that aim to align power prices with utility costs. We already see these trends in California and several states in the Northeast.

 

2. Higher system costs

Solar systems with batteries are going to be about twice as expensive as traditional grid-direct installations, so in that sense, we will see actual costs increase as the mix shifts toward batteries. But while system costs will go up, we need to be careful to parse the actual equipment and soft costs from the consumer’s cost net of tax credits and incentives. Equipment costs for batteries and other hardware are generally flat to slightly down.
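To make the sticker-price vs. net-cost distinction concrete, here is a minimal sketch; the dollar figures are round-number assumptions for illustration, and the 26 percent rate reflects the federal investment tax credit as of 2020.

```python
# Sticker price vs. net consumer cost. Dollar figures are assumed
# round numbers for illustration; 26% is the 2020 federal ITC rate.

PV_ONLY = 15_000          # assumed PV-only installed price, $
PV_PLUS_BATTERY = 30_000  # assumed PV + battery installed price, $
ITC_RATE = 0.26           # federal investment tax credit (2020)

for label, gross in [("PV only", PV_ONLY), ("PV + battery", PV_PLUS_BATTERY)]:
    net = gross * (1 - ITC_RATE)
    print(f"{label}: gross ${gross:,} -> net ${net:,.0f} after the ITC")

# Gross cost doubles, but storage-specific incentives (for example,
# California's SGIP rebate) can narrow the net gap further.
```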

3. More battery and inverter packages from the same brand

Since the battery represents the dominant cost in an energy storage system (ESS), inverter companies will increasingly offer branded batteries. In turn, inverter companies packaging third-party batteries will eventually make way for savvy battery companies that can package the whole system.

4. Energy storage systems treated like heat pumps and air conditioners

California’s new Title 24 requirements make solar PV systems standard issue, and we can expect a future update to do the same for energy storage. By then, builders will be able to choose the ESS line they want to work with, and the whole process will look almost exactly like it does for home mechanical appliances like water heaters and HVAC systems. The only question will be whether the ESS is packaged with solar panels or kept separate.

Standards will evolve

5. Reputation will matter — a lot

The lack of meaningful industry metrics in energy storage creates an environment where branding and reputation become important, since users have little information beyond messaging and word of mouth. Long-term, this will create a barrier to entry for new battery startups, so expect fewer total players once a handful of brands emerge as high-confidence choices.

6. New safety standards and code requirements catch up to technology

Last October, the National Fire Protection Association published the first edition of the NFPA 855 code, which establishes an industrywide safety standard for energy storage systems. Test standards, including UL 9540 and UL 9540A, as well as building and electrical codes, such as the National Electrical Code (NEC/NFPA 70), International Residential Code and International Fire Code, are already being updated to harmonize with NFPA 855. The upshot is that kilowatt-hour capacity limits, siting and protective equipment requirements are becoming standardized and easier for both installers and inspectors to understand and apply.

All things will remain technical

7. Real automation and optimization software will outpace flashy interfaces

Third-party owners have specific PV fleet-management needs and often have proprietary software that their ESS needs to interface with daily. IEEE 2030.5 and related standards will help facilitate this need. Local installers have little in the way of hard requirements, but they and their customers will expect systems to be easy to install and operate.
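For flavor, here is a hedged sketch of what that interfacing can look like. IEEE 2030.5 runs REST/XML over TLS with client certificates, with a DeviceCapability resource (conventionally served at /dcap) as the discovery entry point; the host name, certificate paths, and gateway behavior below are placeholders, not any vendor’s actual API.

```python
# Hedged sketch: discover resources on a hypothetical IEEE 2030.5
# (Smart Energy Profile 2.0) ESS gateway. Host and cert paths are
# placeholders; the standard mandates TLS with client certificates.

import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://ess-gateway.example.com"   # hypothetical gateway
CLIENT_CERT = ("client.crt", "client.key")     # assumed cert/key files

# DeviceCapability is the conventional entry point for discovering
# what the gateway exposes (DER programs, metering, controls, ...)
resp = requests.get(f"{BASE_URL}/dcap", cert=CLIENT_CERT, timeout=10)
resp.raise_for_status()

for child in ET.fromstring(resp.content):
    # each child advertises a linked function set via an href attribute
    print(child.tag, child.attrib.get("href", ""))
```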

In the long term, we’ll see real automation and optimization rather than the data-palooza common today. Many interfaces report too much data, and simplifying systems to hide irrelevant detail will be necessary to avoid alienating more mainstream consumers.

8. Still waiting for vehicle-to-grid

V2G is not primarily a technical challenge; manufacturers such as Nissan and Honda have already made significant headway. The remaining hurdles are procedural: V2G applications will take off when vehicle manufacturers and interface providers come to terms with how and when an electric vehicle’s battery is used for grid services or backup, and how that impacts the EV’s warranty.

There’s also a consumer confidence problem to overcome, especially for those relying solely on their EV for transportation. We’re more likely to see “second-life” EV batteries repackaged for stationary storage, which is much easier to manage than using the battery while it is still in the vehicle.

9. AC and DC coupling will both be around for the foreseeable future

Given the latest National Electrical Code requirements for rapid shutdown, as well as the fact that module-level systems (e.g., Enphase and SolarEdge) represent the majority of installed systems, AC coupling is the clear choice for existing system owners to add batteries.

AC coupling will enjoy at least a temporary boom in popularity as people with existing PV systems seek to add storage. However, most advantages of AC coupling are for retrofits, and the majority of new systems will enjoy lower costs and better performance via DC coupling. DC coupling is arguably going to become more dominant once the PV-only retrofit market is saturated.
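A rough way to see the performance difference is to multiply the conversion efficiencies along each path; the percentages below are typical assumed values for illustration, not measurements of any specific product.

```python
# Back-of-envelope round-trip efficiency for storing PV energy.
# Stage efficiencies are generic assumed values for illustration.

AC_COUPLED = {"PV inverter (DC-AC)": 0.96,
              "battery charger (AC-DC)": 0.95,
              "battery inverter (DC-AC)": 0.95}

DC_COUPLED = {"charge controller (DC-DC)": 0.98,
              "hybrid inverter (DC-AC)": 0.96}

def round_trip(stages):
    """Multiply stage efficiencies to get PV-to-AC-load efficiency."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

print(f"AC-coupled: {round_trip(AC_COUPLED):.1%}")   # ~86.6%
print(f"DC-coupled: {round_trip(DC_COUPLED):.1%}")   # ~94.1%
```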

10. Battery pack voltage will increase dramatically

A century of lead-acid battery dominance has entrenched 48 volts (DC) as the standard battery system voltage. Systems with voltages up to 1,000 VDC are deployed using standard lead-acid cells, but this approach is only practical for engineered commercial and industrial or utility systems.

The Ohm’s law tradeoff between current and voltage pushed the EV industry, which needs to reduce weight and cost everywhere it can, to quickly migrate to high-voltage battery packs built from 3- to 4-VDC lithium-ion cells. Similarly, the stationary energy storage industry is adopting higher-voltage battery packs to reduce the cost of battery inverters. Since conductor losses scale with the square of the current (P = I²R), higher battery voltages also enable better system efficiency.
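The square-law relationship is easy to see with a short calculation; the 400 VDC pack voltage and the cable resistance below are assumptions chosen only to make the comparison concrete.

```python
# Sketch: I^2 R conductor-loss comparison for a 5 kW battery circuit
# at 48 VDC versus a hypothetical 400 VDC pack. The resistance value
# is an illustrative assumption, not a measured figure.

def conductor_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Return resistive loss P = I^2 * R for a load drawing power_w at voltage_v."""
    current_a = power_w / voltage_v          # I = P / V
    return current_a ** 2 * resistance_ohm   # P_loss = I^2 * R

POWER_W = 5_000          # 5 kW inverter draw (assumed)
RESISTANCE_OHM = 0.01    # 10 milliohm round-trip cable resistance (assumed)

for pack_voltage in (48, 400):
    loss = conductor_loss_watts(POWER_W, pack_voltage, RESISTANCE_OHM)
    print(f"{pack_voltage:>4} VDC pack: {loss:7.1f} W lost in cabling")

# 48 VDC: ~108.5 W lost; 400 VDC: ~1.6 W. Raising voltage ~8x cuts
# conductor loss by ~69x (the square of the ratio), which is why
# higher pack voltages improve efficiency and shrink copper costs.
```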

The decade of the 2020s will ring in the age of mass solar-plus-storage solution deployment, allowing businesses and residents to tap into renewables more efficiently, protect against outages, save money and live more sustainably.

Re-posted from Greentech Media.

Scientists create solar panel by combining protein and quantum dots


Credit: CC0 Public Domain

Scientists at the National Research Nuclear University MEPhI (Russia) have created a new type of solar panel based on hybrid material consisting of quantum dots (QDs) and photosensitive protein. The creators believe that it has great potential for solar energy and optical computing.

The results of the MEPhI study were published in Biosensors and Bioelectronics.

Archaeal proteins called bacteriorhodopsins, found in unicellular organisms, can convert the energy of light into the energy of chemical bonds (like chlorophyll in plants). This occurs due to the transfer of a positive charge through the cell membrane. Bacteriorhodopsin acts as a proton pump, which makes it a ready-to-use natural element of a solar panel.

A key difference between bacteriorhodopsin and chlorophyll is its ability to operate without oxygen, allowing the archaea to live in very aggressive environments like the depths of the Dead Sea. This ability has evolutionarily led to their high chemical, thermal, and optical stability. At the same time, by pumping protons, bacteriorhodopsin changes color many times in a billionth of a second. This is why it is a promising material for creating holographic processing units.

Scientists of MEPhI have been able to significantly improve the properties of bacteriorhodopsin by binding it to quantum dots (QDs)—semiconductor nanoparticles capable of concentrating light energy on a scale of just a few nanometers and transmitting it to bacteriorhodopsin without emitting light.

“We have created a highly efficient, operating photosensitive cell that generates electrical current by converting light under very low photon excitation. Under normal conditions, such a cell doesn’t work because photosensitive molecules such as bacteriorhodopsin effectively absorb light only in a very narrow energy range. But quantum dots do this in a very wide range and can even convert two lower-energy photons into one high-energy photon as if stacking them,” a researcher at MEPhI and one of the authors of the study, Viktor Krivenkov said.

According to the researcher, under conditions that would otherwise produce a high-energy photon, a quantum dot may not radiate it but rather transmit its energy to bacteriorhodopsin. Thus, MEPhI scientists have engineered a cell capable of operating under irradiation from the near-infrared to the ultraviolet regions of the optical spectrum.

“We use an interdisciplinary approach at the intersection of chemistry, biology, particle physics and photonics. Quantum dots are produced using chemical synthesis methods, then they are coated with molecules that make their surface simultaneously biocompatible and charged, after which they are bound to the surface of the archaeal bacteriorhodopsin-containing purple membranes of Halobacterium salinarum. As a result, we have obtained hybrid complexes with very high (about 80%) efficiency of excitation energy transfer from the quantum dots to bacteriorhodopsin,” the leading scientist of the MEPhI Nano-Bioengineering Laboratory, Igor Nabiev said.

According to the researchers, the obtained results show the potential for creating highly effective photosensitive elements based on biostructures. They may be used not only to generate electricity, but also in optical computing.

The authors emphasized the very high quality of the bio-hybrid nanostructured material and the prospect of surpassing the best commercial samples with a possible increase in efficiency by a substantial margin. The next goal of the research team in this direction is to optimize the structure of the photosensitive cell.




More information: Victor Krivenkov et al. Remarkably enhanced photoelectrical efficiency of bacteriorhodopsin in quantum dot – Purple membrane complexes under two-photon excitation, Biosensors and Bioelectronics (2019). DOI: 10.1016/j.bios.2019.05.009


Chemists could make ‘smart glass’ smarter by manipulating it at the nanoscale: Colorado State University




An alternative nanoscale design for eco-friendly smart glass

Source: Colorado State University
“Smart glass,” an energy-efficiency product found in newer windows of cars, buildings and airplanes, slowly changes between transparent and tinted at the flip of a switch.

“Slowly” is the operative word; typical smart glass takes several minutes to reach its darkened state, and many cycles between light and dark tend to degrade the tinting quality over time. Colorado State University chemists have devised a potentially major improvement to both the speed and durability of smart glass by providing a better understanding of how the glass works at the nanoscale.

They offer an alternative nanoscale design for smart glass in new research published June 3 in Proceedings of the National Academy of Sciences. The project started as a grant-writing exercise for graduate student and first author R. Colby Evans, whose idea — and passion for the chemistry of color-changing materials — turned into an experiment involving two types of microscopy and enlisting several collaborators. Evans is advised by Justin Sambur, assistant professor in the Department of Chemistry, who is the paper’s senior author.

The smart glass that Evans and colleagues studied is “electrochromic,” which works by using a voltage to drive lithium ions into and out of thin, clear films of a material called tungsten oxide. “You can think of it as a battery you can see through,” Evans said. Typical tungsten-oxide smart glass panels take 7-12 minutes to transition between clear and tinted.

The researchers specifically studied electrochromic tungsten-oxide nanoparticles, which are 100 times smaller than the width of a human hair. Their experiments revealed that single nanoparticles, by themselves, tint four times faster than films of the same nanoparticles. That’s because interfaces between nanoparticles trap lithium ions, slowing down tinting behavior. Over time, these ion traps also degrade the material’s performance.

To support their claims, the researchers used bright field transmission microscopy to observe how tungsten-oxide nanoparticles absorb and scatter light. Making sample “smart glass,” they varied how much nanoparticle material they placed in their samples and watched how the tinting behaviors changed as more and more nanoparticles came into contact with each other. They then used scanning electron microscopy to obtain higher-resolution images of the length, width and spacing of the nanoparticles, so they could tell, for example, how many particles were clustered together, and how many were spread apart.

Based on their experimental findings, the authors proposed that the performance of smart glass could be improved by making a nanoparticle-based material with optimally spaced particles, to avoid ion-trapping interfaces.

Their imaging technique offers a new method for correlating nanoparticle structure and electrochromic properties; improvement of smart window performance is just one application that could result. Their approach could also guide applied research in batteries, fuel cells, capacitors and sensors.

“Thanks to Colby’s work, we have developed a new way to study chemical reactions in nanoparticles, and I expect that we will leverage this new tool to study underlying processes in a wide range of important energy technologies,” Sambur said.

The paper’s co-authors include Austin Ellingworth, a former Research Experience for Undergraduates student from Winona State University; Christina Cashen, a CSU chemistry graduate student; and Christopher R. Weinberger, a professor in CSU’s Department of Mechanical Engineering.

Story Source:

Materials provided by Colorado State University. Original written by Anne Manning.


Journal Reference:

  1. R. Colby Evans, Austin Ellingworth, Christina J. Cashen, Christopher R. Weinberger, Justin B. Sambur. Influence of single-nanoparticle electrochromic dynamics on the durability and speed of smart windows. Proceedings of the National Academy of Sciences, 2019. DOI: 10.1073/pnas.1822007116

 

Colorado State University. “Chemists could make ‘smart glass’ smarter by manipulating it at the nanoscale: An alternative nanoscale design for eco-friendly smart glass.” ScienceDaily. ScienceDaily, 4 June 2019. <www.sciencedaily.com/releases/2019/06/190604131210.htm>.

Engineers at Rice University boost output of solar desalination system by 50%


Concentrating the sunlight on tiny spots on the heat-generating membrane exploits an inherent and previously unrecognized nonlinear relationship between photothermal heating and vapor pressure. Credit: Pratiksha Dongare/Rice University

Rice University’s solar-powered approach for purifying salt water with sunlight and nanoparticles is even more efficient than its creators first believed.

Researchers in Rice’s Laboratory for Nanophotonics (LANP) this week showed they could boost the efficiency of their solar-powered desalination system by more than 50% simply by adding inexpensive plastic lenses to concentrate sunlight into “hot spots.” The results are available online in the Proceedings of the National Academy of Sciences.

“The typical way to boost performance in solar-driven systems is to add solar concentrators and bring in more light,” said Pratiksha Dongare, a graduate student in applied physics at Rice’s Brown School of Engineering and co-lead author of the paper. “The big difference here is that we’re using the same amount of light. We’ve shown it’s possible to inexpensively redistribute that power and dramatically increase the rate of purified water production.”

In conventional membrane distillation, hot, salty water is flowed across one side of a sheetlike membrane while cool, filtered water flows across the other. The temperature difference creates a difference in vapor pressure that drives water vapor from the heated side through the membrane toward the cooler, lower-pressure side. Scaling up the technology is difficult because the temperature difference across the membrane—and the resulting output of clean water—decreases as the size of the membrane increases. Rice’s “nanophotonics-enabled solar membrane distillation” (NESMD) technology addresses this by using light-absorbing nanoparticles to turn the membrane itself into a solar-driven heating element.
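To see why the temperature difference does the work, consider how steeply water’s vapor pressure climbs with temperature. The sketch below uses the Antoine equation with commonly tabulated constants for water (roughly valid from 1 to 100 °C); the feed and permeate temperatures are assumptions.

```python
# Minimal sketch: why a temperature difference across the membrane
# drives vapor flux. Water's vapor pressure rises steeply with
# temperature, here via the Antoine equation with standard
# constants for water (valid roughly 1-100 C).

def water_vapor_pressure_mmhg(t_celsius: float) -> float:
    """Antoine equation for water: log10(P/mmHg) = A - B / (C + T)."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + t_celsius))

hot, cold = 60.0, 25.0   # assumed feed and permeate temperatures, C
p_hot = water_vapor_pressure_mmhg(hot)
p_cold = water_vapor_pressure_mmhg(cold)
print(f"P({hot} C) = {p_hot:.0f} mmHg, P({cold} C) = {p_cold:.0f} mmHg")
print(f"Driving pressure difference: {p_hot - p_cold:.0f} mmHg")

# ~149 vs ~24 mmHg: a 35 C difference yields a ~6x vapor-pressure
# ratio. The steepness of this curve is also why concentrating the
# same sunlight into hot spots pays off nonlinearly.
```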

Rice University researchers (from left) Pratiksha Dongare, Alessandro Alabastri and Oara Neumann showed that Rice’s ‘nanophotonics-enabled solar membrane distillation’ (NESMD) system was more efficient when the size of the device was scaled up and light was concentrated in ‘hot spots.’ Credit: Jeff Fitlow/Rice University

Dongare and colleagues, including study co-lead author Alessandro Alabastri, coat the top layer of their membranes with low-cost, commercially available nanoparticles that are designed to convert more than 80% of sunlight energy into heat. The solar-driven nanoparticle heating reduces production costs, and Rice engineers are working to scale up the technology for applications in remote areas that have no access to electricity.

The concept and particles used in NESMD were first demonstrated in 2012 by LANP director Naomi Halas and research scientist Oara Neumann, who are both co-authors on the new study. In this week’s study, Halas, Dongare, Alabastri, Neumann and LANP physicist Peter Nordlander found they could exploit an inherent and previously unrecognized nonlinear relationship between incident light intensity and vapor pressure.

Alabastri, a physicist and Texas Instruments Research Assistant Professor in Rice’s Department of Electrical and Computer Engineering, used a simple mathematical example to describe the difference between a linear and nonlinear relationship. “If you take any two numbers that equal 10—seven and three, five and five, six and four—you will always get 10 if you add them together. But if the process is nonlinear, you might square them or even cube them before adding. So if we have nine and one, that would be nine squared, or 81, plus one squared, which equals 82. That is far better than 10, which is the best you can do with a linear relationship.”
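Alabastri’s point is easy to reproduce; the toy model below splits a fixed “light budget” of 10 between two spots and compares a linear response with a quadratic one.

```python
# Toy illustration of the linear-vs-nonlinear point made above:
# split a fixed light budget of 10 between two spots and compare
# a linear response (sum) with a quadratic one (sum of squares).

def linear_response(a: float, b: float) -> float:
    return a + b            # always 10 for any split of 10

def quadratic_response(a: float, b: float) -> float:
    return a**2 + b**2      # rewards concentrating the budget

for a, b in [(5, 5), (6, 4), (7, 3), (9, 1)]:
    print(f"split {a}+{b}: linear={linear_response(a, b)}, "
          f"nonlinear={quadratic_response(a, b)}")

# The more uneven the split, the bigger the nonlinear output:
# (9,1) -> 82 vs (5,5) -> 50. Concentrating light into "hot spots"
# exploits the same effect in the vapor-pressure response.
```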

In the case of NESMD, the nonlinear improvement comes from concentrating sunlight into tiny spots, much like a child might with a magnifying glass on a sunny day. Concentrating the light on a tiny spot on the membrane results in a linear increase in heat, but the heating, in turn, produces a nonlinear increase in vapor pressure. And the increased pressure forces more purified steam through the membrane in less time.

Researchers from Rice University’s Laboratory for Nanophotonics found they could boost the efficiency of their solar-powered desalination system by more than 50% by adding inexpensive plastic lenses to concentrate sunlight into “hot spots.” Credit: Pratiksha Dongare/Rice University

“We showed that it’s always better to have more photons in a smaller area than to have a homogeneous distribution of photons across the entire membrane,” Alabastri said.

Halas, a chemist and engineer who’s spent more than 25 years pioneering the use of light-activated nanomaterials, said, “The efficiencies provided by this nonlinear optical process are important because water scarcity is a daily reality for about half of the world’s people, and efficient solar distillation could change that.

“Beyond water purification, this nonlinear optical effect also could improve technologies that use solar heating to drive chemical processes like photocatalysis,” Halas said.

For example, LANP is developing a copper-based nanoparticle for converting ammonia into hydrogen fuel at ambient pressure.

Halas is the Stanley C. Moore Professor of Electrical and Computer Engineering, director of Rice’s Smalley-Curl Institute and a professor of chemistry, bioengineering, physics and astronomy, and materials science and nanoengineering.

NESMD is in development at the Rice-based Center for Nanotechnology Enabled Water Treatment (NEWT) and won research and development funding from the Department of Energy’s Solar Desalination program in 2018.




More information: Pratiksha D. Dongare et al, Solar thermal desalination as a nonlinear optical process, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1905311116

Provided by Rice University

If Solar And Wind Are So Cheap, Why Are They Making Electricity So Expensive?



Over the last year, the media have published story after story after story about the declining price of solar panels and wind turbines.

People who read these stories are understandably left with the impression that the more solar and wind energy we produce, the lower electricity prices will become.

And yet that’s not what’s happening. In fact, it’s the opposite.

Between 2009 and 2017, the price of solar panels per watt declined by 75 percent while the price of wind turbines per watt declined by 50 percent.

And yet — during the same period — the price of electricity in places that deployed significant quantities of renewables increased dramatically.

Electricity prices increased by 51 percent in Germany during its expansion of solar and wind energy, and by 24 percent in California during its solar energy build-out from 2011 to 2017.

What gives? If solar panels and wind turbines became so much cheaper, why did the price of electricity rise instead of decline?


One hypothesis might be that while electricity from solar and wind became cheaper, other energy sources like coal, nuclear, and natural gas became more expensive, eliminating any savings, and raising the overall price of electricity.

But, again, that’s not what happened.

The price of natural gas declined by 72 percent in the U.S. between 2009 and 2016 due to the fracking revolution. In Europe, natural gas prices dropped by a little less than half over the same period.

The price of nuclear and coal in those places during the same period was mostly flat.


Another hypothesis might be that the closure of nuclear plants resulted in higher energy prices.

Evidence for this hypothesis comes from the fact that nuclear energy leaders Illinois, France, Sweden and South Korea enjoy some of the cheapest electricity in the world.

Since 2010, California closed one nuclear plant (2,140 MW installed capacity) while Germany closed 5 nuclear plants and 4 other reactors at currently-operating plants (10,980 MW in total).

Electricity in Illinois is 42 percent cheaper than electricity in California while electricity in France is 45 percent cheaper than electricity in Germany.

But this hypothesis is undermined by the fact that the price of the main replacement fuels, natural gas and coal, remained low, despite increased demand for those two fuels in California and Germany.

That leaves us with solar and wind as the key suspects behind higher electricity prices. But why would cheaper solar panels and wind turbines make electricity more expensive?

The main reason appears to have been predicted by a young German economist in 2013.

In a paper for Energy Policy, Lion Hirth estimated that the economic value of wind and solar would decline significantly as they become a larger part of electricity supply.

The reason? Their fundamentally unreliable nature. Both solar and wind produce too much energy when societies don’t need it, and not enough when they do.

Solar and wind thus require that natural gas plants, hydro-electric dams, batteries or some other form of reliable power be ready at a moment’s notice to start churning out electricity when the wind stops blowing and the sun stops shining.

And unreliability requires solar- and/or wind-heavy places like Germany, California and Denmark to pay neighboring nations or states to take their solar and wind energy when they are producing too much of it.

Hirth predicted that the economic value of wind on the European grid would decline 40 percent once it becomes 30 percent of electricity while the value of solar would drop by 50 percent when it got to just 15 percent.


In 2017, the share of electricity coming from wind and solar was 53 percent in Denmark, 26 percent in Germany, and 23 percent in California. Denmark and Germany have the first and second most expensive electricity in Europe.

By reporting on the declining costs of solar panels and wind turbines but not on how they increase electricity prices, journalists are — intentionally or unintentionally — misleading policymakers and the public about those two technologies.

The Los Angeles Times last year reported that California’s electricity prices were rising, but failed to connect the price rise to renewables, provoking a sharp rebuttal from UC Berkeley economist James Bushnell.

“The story of how California’s electric system got to its current state is a long and gory one,” Bushnell wrote, but “the dominant policy driver in the electricity sector has unquestionably been a focus on developing renewable sources of electricity generation.”


Part of the problem is that many reporters don’t understand electricity. They think of electricity as a commodity when it is, in fact, a service — like eating at a restaurant.

The price we pay for the luxury of eating out isn’t just the cost of the ingredients, most of which, like solar panels and wind turbines, have declined in price for decades.

Rather, the price of services like eating out and electricity reflects the cost not only of a few ingredients but also of their preparation and delivery.

This is a problem of bias, not just energy illiteracy. Normally skeptical journalists routinely give renewables a pass. The reason isn’t that they don’t know how to report critically on energy — they do so regularly when it comes to non-renewable energy sources — but rather that they don’t want to.

That could — and should — change. Reporters have an obligation to report accurately and fairly on all issues they cover, especially ones as important as energy and the environment.

A good start would be for them to investigate why, if solar and wind are so cheap, they are making electricity so expensive.

Re-posted from Forbes; written by Michael Shellenberger.

MIT Study: Adding power choices reduces cost and risk of carbon-free electricity



New MIT research shows that, unless steady, continuous carbon-free sources of electricity are included in the mix, costs of decarbonizing the electrical system could be prohibitive and end up derailing attempts to mitigate the most severe effects of global climate change. Image: Chelsea Turner

To curb greenhouse gas emissions, nations, states, and cities should aim for a mix of fuel-saving, flexible, and highly reliable sources.

In major legislation passed at the end of August, California committed to creating a 100 percent carbon-free electricity grid — once again leading other nations, states, and cities in setting aggressive policies for slashing greenhouse gas emissions. Now, a study by MIT researchers provides guidelines for cost-effective and reliable ways to build such a zero-carbon electricity system.

The best way to tackle emissions from electricity, the study finds, is to use the most inclusive mix of low-carbon electricity sources.

Costs have declined rapidly for wind power, solar power, and energy storage batteries in recent years, leading some researchers, politicians, and advocates to suggest that these sources alone can power a carbon-free grid. But the new study finds that across a wide range of scenarios and locations, pairing these sources with steady carbon-free resources that can be counted on to meet demand in all seasons and over long periods — such as nuclear, geothermal, bioenergy, and natural gas with carbon capture — is a less costly and lower-risk route to a carbon-free grid.

The new findings are described in a paper published today in the journal Joule, by MIT doctoral student Nestor Sepulveda, Jesse Jenkins PhD ’18, Fernando de Sisternes PhD ’14, and professor of nuclear science and engineering and Associate Provost Richard Lester.

The need for cost effectiveness

“In this paper, we’re looking for robust strategies to get us to a zero-carbon electricity supply, which is the linchpin in overall efforts to mitigate climate change risk across the economy,” Jenkins says. To achieve that, “we need not only to get to zero emissions in the electricity sector, but we also have to do so at a low enough cost that electricity is an attractive substitute for oil, natural gas, and coal in the transportation, heat, and industrial sectors, where decarbonization is typically even more challenging than in electricity.”

Sepulveda also emphasizes the importance of cost-effective paths to carbon-free electricity, adding that in today’s world, “we have so many problems, and climate change is a very complex and important one, but not the only one. So every extra dollar we spend addressing climate change is also another dollar we can’t use to tackle other pressing societal problems, such as eliminating poverty or disease.” Thus, it’s important for research not only to identify technically achievable options to decarbonize electricity, but also to find ways to achieve carbon reductions at the most reasonable possible cost.

To evaluate the costs of different strategies for deep decarbonization of electricity generation, the team looked at nearly 1,000 different scenarios involving different assumptions about the availability and cost of low-carbon technologies, geographical variations in the availability of renewable resources, and different policies on their use.

Regarding the policies, the team compared two different approaches. The “restrictive” approach permitted only the use of solar and wind generation plus battery storage, augmented by measures to reduce and shift the timing of demand for electricity, as well as long-distance transmission lines to help smooth out local and regional variations. The “inclusive” approach used all of those technologies but also permitted the option of using continual carbon-free sources, such as nuclear power, bioenergy, and natural gas with a system for capturing and storing carbon emissions. In every case the team studied, the broader mix of sources was found to be more affordable.

The cost savings of the more inclusive approach relative to the more restricted case were substantial. Including continual, or “firm,” low-carbon resources in a zero-carbon resource mix lowered costs anywhere from 10 percent to as much as 62 percent, across the many scenarios analyzed. That’s important to know, the authors stress, because in many cases existing and proposed regulations and economic incentives favor, or even mandate, a more restricted range of energy resources.

“The results of this research challenge what has become conventional wisdom on both sides of the climate change debate,” Lester says. “Contrary to fears that effective climate mitigation efforts will be cripplingly expensive, our work shows that even deep decarbonization of the electric power sector is achievable at relatively modest additional cost. But contrary to beliefs that carbon-free electricity can be generated easily and cheaply with wind, solar energy, and storage batteries alone, our analysis makes clear that the societal cost of achieving deep decarbonization that way will likely be far more expensive than is necessary.”


A new taxonomy for electricity sources

In looking at options for new power generation in different scenarios, the team found that the traditional way of describing different types of power sources in the electrical industry — “baseload,” “load following,” and “peaking” resources — is outdated and no longer useful, given the way new resources are being used.

Rather, they suggest, it’s more appropriate to think of power sources in three new categories: “fuel-saving” resources, which include solar, wind and run-of-the-river (that is, without dams) hydropower; “fast-burst” resources, providing rapid but short-duration responses to fluctuations in electricity demand and supply, including battery storage and technologies and pricing strategies to enhance the responsiveness of demand; and “firm” resources, such as nuclear, hydro with large reservoirs, biogas, and geothermal.

“Because we can’t know with certainty the future cost and availability of many of these resources,” Sepulveda notes, “the cases studied covered a wide range of possibilities, in order to make the overall conclusions of the study robust across that range of uncertainties.”

Range of scenarios

The group used a range of projections, made by agencies such as the National Renewable Energy Laboratory, as to the expected costs of different power sources over the coming decades, including costs similar to today’s and anticipated cost reductions as new or improved systems are developed and brought online. For each technology, the researchers chose a projected mid-range cost, along with a low-end and high-end cost estimate, and then studied many combinations of these possible future costs.
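In spirit, the sweep works like the toy sketch below: enumerate low/mid/high cost combinations and compare the cheapest restricted portfolio with an inclusive one. Every number here is a placeholder invented for illustration, not data from the study, and the study’s actual model optimizes full hourly operation, which this sketch does not attempt.

```python
# Toy version of the scenario sweep described above. All costs are
# invented placeholders for illustration only.

from itertools import product

WIND_SOLAR_STORAGE = [45, 70, 110]   # assumed $/MWh under low/mid/high projections
FIRM_LOW_CARBON    = [60, 90, 130]   # assumed $/MWh for firm resources

OVERBUILD_PENALTY = 1.6  # assumed extra cost to cover all hours with
                         # variable resources and storage alone
FIRM_SHARE = 0.3         # assumed firm share in the inclusive mix

savings = []
for vre_cost, firm_cost in product(WIND_SOLAR_STORAGE, FIRM_LOW_CARBON):
    restricted = vre_cost * OVERBUILD_PENALTY
    inclusive = min(restricted, (1 - FIRM_SHARE) * vre_cost + FIRM_SHARE * firm_cost)
    savings.append((restricted - inclusive) / restricted)

print(f"inclusive mix cheaper in {sum(s > 0 for s in savings)}/{len(savings)} scenarios")
print(f"savings range: {min(savings):.0%} to {max(savings):.0%}")
```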

Under every scenario, cases that were restricted to using fuel-saving and fast-burst technologies had a higher overall cost of electricity than cases using firm low-carbon sources as well, “even with the most optimistic set of assumptions about future cost reductions,” Sepulveda says.

That’s true, Jenkins adds, “even when we assume, for example, that nuclear remains as expensive as it is today, and wind and solar and batteries get much cheaper.”

The authors also found that across all of the wind-solar-batteries-only cases, the cost of electricity rises rapidly as systems move toward zero emissions, but when firm power sources are also available, electricity costs increase much more gradually as emissions decline to zero.

“If we decide to pursue decarbonization primarily with wind, solar, and batteries,” Jenkins says, “we are effectively ‘going all in’ and betting the planet on achieving very low costs for all of these resources,” as well as the ability to build out continental-scale  high-voltage transmission lines and to induce much more flexible electricity demand.

In contrast, “an electricity system that uses firm low-carbon resources together with solar, wind, and storage can achieve zero emissions with only modest increases in cost even under pessimistic assumptions about how cheap these carbon-free resources become or our ability to unlock flexible demand or expand the grid,” says Jenkins. This shows how the addition of firm low-carbon resources “is an effective hedging strategy that reduces both the cost and risk” for fully decarbonizing power systems, he says.

Even though a fully carbon-free electricity supply is years away in most regions, it is important to do this analysis today, Sepulveda says, because decisions made now about power plant construction, research investments, or climate policies have impacts that can last for decades.

“If we don’t start now” in developing and deploying the widest range of carbon-free alternatives, he says, “that could substantially reduce the likelihood of getting to zero emissions.”

David Victor, a professor of international relations at the University of California at San Diego, who was not involved in this study, says, “After decades of ignoring the problem of climate change, finally policymakers are grappling with how they might make deep cuts in emissions. This new paper in Joule shows that deep decarbonization must include a big role for reliable, firm sources of electric power. The study, one of the few rigorous numerical analyses of how the grid might actually operate with low-emission technologies, offers some sobering news for policymakers who think they can decarbonize the economy with wind and solar alone.”

The research received support from the MIT Energy Initiative, the Martin Family Trust, and the Chilean Navy.

New Material For Splitting Water: Halide double Perovskites – “All the Right Properties” for creating Fuel Cells



MIT: Novel methods of synthesizing quantum dot materials – promising materials for high performance in electronic and optical devices


These images show scanning electron micrographs of the researchers’ sample quantum dot films. The dark spots are the individual quantum dots, each about 5 nanometers in diameter.

For quantum dot (QD) materials to perform well in devices such as solar cells, the nanoscale crystals in them need to pack together tightly so that electrons can hop easily from one dot to the next and flow out as current. MIT researchers have now made QD films in which the dots vary by just one atom in diameter and are organized into solid lattices with unprecedented order. Subsequent processing pulls the QDs in the film closer together, further easing the electrons’ pathway. Tests using an ultrafast laser confirm that the energy levels of vacancies in adjacent QDs are so similar that hopping electrons don’t get stuck in low-energy dots along the way.

Taken together, the results suggest a new direction for ongoing efforts to develop these promising materials for high performance in electronic and optical devices.

In recent decades, much research attention has focused on electronic materials made of quantum dots, which are tiny crystals of semiconducting materials a few nanometers in diameter. After three decades of research, QDs are now being used in TV displays, where they emit bright light in vivid colors that can be fine-tuned by changing the sizes of the nanoparticles. But many opportunities remain for taking advantage of these remarkable materials.

“QDs are a really promising underlying materials technology for energy applications,” says William Tisdale, the ARCO Career Development Professor in Energy Studies and an associate professor of chemical engineering.

QD materials pique his interest for several reasons. QDs are easily synthesized in a solvent at low temperatures using standard procedures. The QD-bearing solvent can then be deposited on a surface—small or large, rigid or flexible—and as it dries, the QDs are left behind as a solid. Best of all, the electronic and optical properties of that solid can be controlled by tuning the QDs.

“With QDs, you have all these degrees of freedom,” says Tisdale. “You can change their composition, size, shape, and surface chemistry to fabricate a material that’s tailored for your application.”

The ability to adjust electron behavior to suit specific devices is of particular interest. For example, in solar photovoltaics (PVs), electrons should pick up energy from sunlight and then move rapidly through the material and out as current before they lose their excess energy. In light-emitting diodes (LEDs), high-energy “excited” electrons should relax on cue, emitting their extra energy as light.

With thermoelectric (TE) devices, QD materials could be a game-changer. When TE materials are hotter on one side than the other, they generate electricity. So TE devices could turn waste heat in car engines, industrial equipment, and other sources into power—without combustion or moving parts. The TE effect has been known for a century, but devices using TE materials have remained inefficient. The problem: While those materials conduct electricity well, they also conduct heat well, so the temperatures of the two ends of a device quickly equalize. In most materials, measures to decrease heat flow also decrease electron flow.

“With QDs, we can control those two properties separately,” says Tisdale. “So we can simultaneously engineer our material so it’s good at transferring electrical charge but bad at transporting heat.”

Making good arrays

One challenge in working with QDs has been to make particles that are all the same size and shape. During QD synthesis, quadrillions of nanocrystals are deposited onto a surface, where they self-assemble in an orderly fashion as they dry. If the individual QDs aren’t all exactly the same, they can’t pack together tightly, and electrons won’t move easily from one nanocrystal to the next.

Three years ago, a team in Tisdale’s lab led by Mark Weidman Ph.D. ’16 demonstrated a way to reduce that structural disorder. In a series of experiments with lead-sulfide QDs, team members found that carefully selecting the ratio between the lead and sulfur in the starting materials would produce QDs of uniform size.

“As those nanocrystals dry, they self-assemble into a beautifully ordered arrangement we call a superlattice,” Tisdale says.

As shown in these schematics, at the center of a quantum dot is a core of a semiconducting material. Radiating outward from that core are arms, or ligands, of an organic material. The ligands keep the quantum dots in solution from sticking together.

Scanning electron microscope images of those superlattices taken from several angles show lined-up, 5-nanometer-diameter nanocrystals throughout the samples and confirm the long-range ordering of the QDs.

For a closer examination of their materials, Weidman performed a series of X-ray scattering experiments at the National Synchrotron Light Source at Brookhaven National Laboratory. Data from those experiments showed both how the QDs are positioned relative to one another and how they’re oriented, that is, whether they’re all facing the same way. The results confirmed that QDs in the superlattices are well ordered and essentially all the same.

“On average, the difference in diameter between one nanocrystal and another was less than the size of one more atom added to the surface,” says Tisdale. “So these QDs have unprecedented monodispersity, and they exhibit structural behavior that we hadn’t seen previously because no one could make QDs this monodisperse.”

Controlling electron hopping

The researchers next focused on how to tailor their monodisperse QD materials for efficient transfer of electrical current. “In a PV or TE device made of QDs, the electrons need to be able to hop effortlessly from one dot to the next and then do that many thousands of times as they make their way to the metal electrode,” Tisdale explains.

One way to influence hopping is by controlling the spacing from one QD to the next. A single QD consists of a core of semiconducting material—in this work, lead sulfide—with chemically bound arms, or ligands, made of organic (carbon-containing) molecules radiating outward. The ligands play a critical role—without them, as the QDs form in solution, they’d stick together and drop out as a solid clump. Once the QD layer is dry, the ligands end up as solid spacers that determine how far apart the nanocrystals are.

A standard ligand material used in QD synthesis is oleic acid. Given the length of an oleic acid ligand, the QDs in the dry superlattice end up about 2.6 nanometers apart—and that’s a problem.

“That may sound like a small distance, but it’s not,” says Tisdale. “It’s way too big for a hopping electron to get across.”

Using shorter ligands in the starting solution would reduce that distance, but they wouldn’t keep the QDs from sticking together when they’re in solution. “So we needed to swap out the long oleic acid ligands in our solid materials for something shorter” after the film formed, Tisdale says.

To achieve that replacement, the researchers use a process called ligand exchange. First, they prepare a mixture of a shorter ligand and an organic solvent that will dissolve oleic acid but not the lead sulfide QDs. They then submerge the QD film in that mixture for 24 hours. During that time, the oleic acid ligands dissolve, and the new, shorter ligands take their place, pulling the QDs closer together. The solvent and oleic acid are then rinsed off.

Tests with various ligands confirmed their impact on interparticle spacing. Depending on the length of the selected ligand, the researchers could reduce that spacing from the original 2.6 nanometers with oleic acid all the way down to 0.4 nanometers. However, while the resulting films have beautifully ordered regions—perfect for fundamental studies—inserting the shorter ligands tends to generate cracks as the overall volume of the QD sample shrinks.
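A crude tunneling estimate shows why the distance matters so much; the decay constant below is a generic assumed value for saturated organic barriers, used only to make the scaling visible.

```python
# Toy estimate of why a 2.6 nm ligand gap blocks hopping: in the
# simplest tunneling picture the rate falls off as exp(-beta * d).
# beta here is a generic assumed decay constant (~1 per angstrom)
# for saturated organic barriers -- an illustration, not a fit.

import math

BETA_PER_ANGSTROM = 1.0   # assumed tunneling decay constant

def relative_hop_rate(spacing_nm: float) -> float:
    """Relative tunneling rate through a barrier of width spacing_nm."""
    return math.exp(-BETA_PER_ANGSTROM * spacing_nm * 10)  # nm -> angstrom

for spacing in (2.6, 0.4):
    print(f"{spacing} nm gap: relative rate ~ {relative_hop_rate(spacing):.2e}")

# The 0.4 nm gap is ~e^22 (~4e9) times faster than the 2.6 nm gap,
# which is why swapping oleic acid for shorter ligands matters so much.
```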

Energetic alignment of nanocrystals

One result of that work came as a surprise: Ligands known to yield high performance in lead-sulfide-based solar cells didn’t produce the shortest interparticle spacing in their tests.

These graphs show electron energy measurements in a standard quantum dot film (top) and in a film made from monodisperse quantum dots (bottom).

“Reducing that spacing to get good conductivity is necessary,” says Tisdale. “But there may be other aspects of our QD material that we need to optimize to facilitate electron transfer.”

One possibility is a mismatch between the energy levels of the electrons in adjacent QDs. In any material, electrons exist at only two energy levels—a low ground state and a high excited state. If an electron in a QD film receives extra energy—say, from incoming sunlight—it can jump up to its excited state and move through the material until it finds a low-energy opening left behind by another traveling electron. It then drops down to its ground state, releasing its excess energy as heat or light.

In solid crystals, those two energy levels are a fixed characteristic of the material itself. But in QDs, they vary with particle size. Make a QD smaller and the energy level of its excited electrons increases. Again, variability in QD size can create problems. Once excited, a high-energy electron in a small QD will hop from dot to dot—until it comes to a large, low-energy QD.

“Excited electrons like going downhill more than they like going uphill, so they tend to hang out on the low-energy dots,” says Tisdale. “If there’s then a high-energy dot in the way, it takes them a long time to get past that bottleneck.”
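The bottleneck effect can be caricatured with a one-dimensional hopping model; the trap depth, chain length, and Boltzmann escape rule below are assumptions for illustration, not the paper’s model.

```python
# Toy 1-D hopping chain with a single low-energy "trap" dot in the
# middle. Escaping the trap requires a thermally activated uphill
# hop with Boltzmann probability exp(-depth / kT). All parameters
# are illustrative assumptions.

import math
import random

KT_MEV = 25.0  # thermal energy at room temperature, ~25 meV

def average_hops(chain_length=20, trap_depth_mev=0.0, trials=500):
    """Average number of hop attempts to traverse the chain once."""
    rng = random.Random(42)
    total = 0
    for _ in range(trials):
        pos, hops = 0, 0
        while pos < chain_length:
            hops += 1
            at_trap = (pos == chain_length // 2)
            if at_trap and rng.random() > math.exp(-trap_depth_mev / KT_MEV):
                continue  # failed uphill escape; electron stays put
            pos += 1
        total += hops
    return total / trials

for depth in (0, 50, 100):
    print(f"trap depth {depth:3d} meV -> avg hops: {average_hops(trap_depth_mev=depth):.0f}")

# Deeper traps (bigger dot-size mismatch) cost exponentially more
# wasted attempts -- the bottleneck described above.
```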

So the greater the mismatch between energy levels—called energetic disorder—the worse the electron mobility. To measure the impact of energetic disorder on electron flow in their samples, Rachel Gilmore Ph.D. ’17 and her collaborators used a technique called pump-probe spectroscopy—as far as they know, the first time this method has been used to study electron hopping in QDs.

QDs in an excited state absorb light differently than do those in the ground state, so shining light through a material and taking an absorption spectrum provides a measure of the electronic states in it. But in QD materials, electron hopping events can occur within picoseconds—trillionths (10⁻¹²) of a second—which is faster than any electrical detector can measure.

The researchers therefore set up a special experiment using an ultrafast laser, whose beam is made up of quick pulses arriving at a rate of 100,000 per second. Their setup subdivides the laser beam such that a single pulse is split into a pump pulse that excites a sample and—after a delay measured in femtoseconds (10⁻¹⁵ seconds)—a corresponding probe pulse that measures the sample’s energy state after the delay. By gradually increasing the delay between the pump and probe pulses, they gather absorption spectra that show how much electron transfer has occurred and how quickly the excited electrons drop back to their ground state.
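Conceptually, extracting a hopping time from such a delay scan is a curve fit; the sketch below synthesizes an exponential decay and recovers its time constant. The 80 ps value echoes the hopping times reported below, but the data here are simulated, not the paper’s.

```python
# Sketch: recover a time constant from a simulated pump-probe delay
# scan. The 80 ps "true" value and the noise level are assumptions.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
delays_ps = np.linspace(0, 500, 60)   # pump-probe delays, in picoseconds
TRUE_TAU_PS = 80.0                    # assumed hopping time constant

signal = np.exp(-delays_ps / TRUE_TAU_PS) + rng.normal(0, 0.02, delays_ps.size)

def decay(t, amplitude, tau):
    """Single-exponential excited-state decay model."""
    return amplitude * np.exp(-t / tau)

(amp, tau_fit), _ = curve_fit(decay, delays_ps, signal, p0=(1.0, 50.0))
print(f"fitted time constant: {tau_fit:.0f} ps")
```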

Using this technique, they measured electron energy in a QD sample with standard dot-to-dot variability and in one of the monodisperse samples. In the sample with standard variability, the excited electrons lose much of their excess energy within 3 nanoseconds. In the monodisperse sample, little energy is lost in the same time period—an indication that the energy levels of the QDs are all about the same.

By combining their spectroscopy results with computer simulations of the electron transport process, the researchers extracted electron hopping times ranging from 80 picoseconds for their smallest quantum dots to over 1 nanosecond for the largest ones. And they concluded that their QD materials are at the theoretical limit of how little energetic disorder is possible. Indeed, any difference in energy between neighboring QDs isn’t a problem. At room temperature, energy levels are always vibrating a bit, and those fluctuations are larger than the small differences from one QD to the next.

“So at some instant, random kicks in energy from the environment will cause the energy levels of the QDs to line up, and the electron will do a quick hop,” says Tisdale.

The way forward

With energetic disorder no longer a concern, Tisdale concludes that further progress in making commercially viable QD materials will require better ways of dealing with structural disorder. He and his team tested several methods of performing ligand exchange in solid samples, and none produced films with consistent QD size and spacing over large areas without cracks. As a result, he now believes that efforts to optimize that process “may not take us where we need to go.”

What’s needed instead is a way to put short ligands on the QDs when they’re in solution and then let them self-assemble into the desired structure.

“There are some emerging strategies for solution-phase ligand exchange,” he says. “If they’re successfully developed and combined with monodisperse QDs, we should be able to produce beautifully ordered, large-area structures well suited for devices such as solar cells, LEDs, and thermoelectric systems.”


More information: Rachel H. Gilmore et al. Charge Carrier Hopping Dynamics in Homogeneously Broadened PbS Quantum Dot Solids, Nano Letters (2017). DOI: 10.1021/acs.nanolett.6b04201

Mark C. Weidman et al. Monodisperse, Air-Stable PbS Nanocrystals via Precursor Stoichiometry Control, ACS Nano (2014). DOI: 10.1021/nn5018654

Mark C. Weidman et al. Interparticle Spacing and Structural Ordering in Superlattice PbS Nanocrystal Solids Undergoing Ligand Exchange, Chemistry of Materials (2014). DOI: 10.1021/cm503626s

 

From Electric Vehicles to Micro-Mobility and the NextGen ‘Green Revolution’: Panasonic Far From Being Only a Battery Supplier (CES 2018)


Panasonic is far from being satisfied with only a battery supplier role. The Japanese company has greater ambitions and intends to offer its scalable “ePowertrain” platform for small EVs.

The main targets for the ePowertrain are EV bikes and micro EVs. These should now be easier to develop and produce using Panasonic’s power unit (with an on-board charger, junction box, inverter and DC-to-DC converter) and a motor unit. Of course, batteries are available too.

“Panasonic Corporation announced today that it has developed a scalable “ePowertrain” platform, a solution for the effective development of small electric vehicles (EVs). The platform is a systematized application of devices used in the EVs of major global carmakers, and is intended to contribute to the advancement of the coming mobility society.

Global demand for EVs is expected to expand rapidly, along with a wide variety of new mobility. These include not only conventional passenger vehicles but also new types of EVs, such as EV bikes and micro EVs, which suit various lifestyles and uses in each region.

The platform Panasonic has developed for EV bikes and micro EVs is an energy-efficient, safe powertrain that features integrated compactness, high efficiency, and flexible scalability. It consists of basic units, including a power unit (with an on-board charger, junction box, inverter and DC-to-DC converter) and a motor unit. The platform will help reduce costs and lead time for vehicle development by scaling up or down the combination of basic units in accordance with vehicle specifications such as size, speed and torque.

Panasonic has developed and delivered a wide range of components – including batteries, on-board chargers, film capacitors, DC-to-DC converters and relays – specifically for EVs, plug-in hybrids, and hybrid EVs. Panasonic will continue to contribute to the global growth in EVs through system development that makes use of the strengths of our devices.”

In the case of full-size cars, Panasonic is most known for its battery cells supplied to Tesla. The partnership was recently expanded to include solar cells.

Panasonic feels fairly independent of Tesla, stressing that it has its own battery factory “inside” the Tesla Gigafactory; however, the cells were “jointly designed and engineered.”

Annual production of 35 GWh is expected in 2019.

Production of New Battery Cells for Tesla’s “Model 3”

Panasonic’s lithium-ion battery factory within Tesla’s Gigafactory handles production of 2170-size cylindrical battery cells for Tesla’s energy storage system and its new “Model 3” sedan, which began production in July 2017. The high performance cylindrical “2170 cell” was jointly designed and engineered by Tesla and Panasonic to offer the best performance at the lowest production cost in an optimal form factor for both electric vehicles (EVs) and energy products. Panasonic and Tesla are conducting phased investment in the Gigafactory, which will have 35 GWh/year production capacity of lithium-ion battery cells, more than was produced worldwide in 2013. Panasonic is estimating that global production volume for electric vehicles in fiscal 2026 will see an approximately six-fold increase from fiscal 2017 to over 3 million units. The Company will contribute to the realization of a sustainable energy society through the provision of electric vehicle batteries.
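A quick sanity check on those numbers, with an assumed average pack size (the per-vehicle figure is a round-number assumption, not a Panasonic or Tesla specification):

```python
# Rough arithmetic on the quoted Gigafactory capacity.

GWH_PER_YEAR = 35        # quoted cell production capacity
ASSUMED_PACK_KWH = 60    # assumed average EV pack size

packs_per_year = GWH_PER_YEAR * 1_000_000 / ASSUMED_PACK_KWH
print(f"35 GWh/yr ~= {packs_per_year:,.0f} packs of {ASSUMED_PACK_KWH} kWh")

# ~583,000 packs per year: supplying the >3 million EVs per year that
# Panasonic projects for fiscal 2026 would take several plants this size.
```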


In regards to solar cells, Panasonic expects 1 GW output at the Tesla Gigafactory 2 in Buffalo, New York in 2019.

The solar cells are used both in conventional modules, as well as in Tesla Solar Roof tiles.

Strengthening Collaboration with Tesla

In addition to the collaboration with Tesla in the lithium-ion battery business, Panasonic also collaborates with the company in the solar cell business and will begin production of solar cells this summer at its Buffalo, New York, factory. Solar cells produced at this factory are supplied to Tesla. In addition, the solar cells are used in roof tiles sold by Tesla, a product that integrates solar cells with roofing materials. Panasonic will continue its investment in the factory going forward and plans to raise solar cell production capacity to 1 GW by 2019.

Researchers make atoms-thick Post-It notes for solar cells and circuits: U of Chicago


Schematic diagram (left) and electron microscope image (right) of a stacked set of semiconductor films, made using the Park lab’s new technique. Credit: Park et al./Nature

Over the past half-century, scientists have shaved silicon films down to just a wisp of atoms in pursuit of smaller, faster electronics. For the next set of breakthroughs, though, they’ll need novel ways to build even tinier and more powerful devices.

A study led by UChicago researchers, published Sept. 20 in Nature, describes an innovative method to make stacks of semiconductors just a few atoms thick. The technique offers scientists and engineers a simple, cost-effective method to make thin, uniform layers of these materials, which could expand capabilities for devices from solar cells to cell phones.

Stacking thin layers of materials offers a range of possibilities for making devices with unique properties. But manufacturing such films is a delicate process, with little room for error.

“The scale of the problem we’re looking at is, imagine trying to lay down a flat sheet of plastic wrap the size of Chicago without getting any wrinkles in it,” said Jiwoong Park, a UChicago professor with the Department of Chemistry, the Institute for Molecular Engineering and the James Franck Institute, who led the study. “When the material itself is just atoms thick, every little stray atom is a problem.”

Today, these layers are typically “grown” one on top of another rather than stacked. But that means the bottom layers have to be subjected to harsh growth conditions, such as high temperatures, while the new ones are added—a process that limits the materials with which they can be made.

Park’s team instead made the films individually. Then they put them into a vacuum, peeled them off and stuck them to one another, like Post-It notes. This allowed the scientists to make films that were connected with weak bonds instead of stronger covalent bonds—interfering less with the perfect surfaces between the layers.

“The films, vertically controlled at the atomic-level, are exceptionally high-quality over entire wafers,” said Kibum Kang, a postdoctoral associate who was the first author of the study.

Kan-Heng Lee, a graduate student and co-first author of the study, then tested the films’ electrical properties by making them into devices and showed that their functions can be designed on the atomic scale, which could allow them to serve as the essential ingredient for future computer chips.

The method opens up a myriad of possibilities for such films. They can be made on top of water or plastics; they can be made to detach by dipping them into water; and they can be carved or patterned with an ion beam. Researchers are exploring the full range of what can be done with the method, which they said is simple and cost-effective.

“We expect this new approach to accelerate the discovery of novel materials, as well as enabling large-scale manufacturing,” Park said.


More information: “Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures,” Kang et al., Nature (2017). DOI: 10.1038/nature23905

 
