Genesis Nanotech – ICYMI – Our Top 3 Blog Posts (as picked by you) This Week


#1

MIT Review: Borophene (not graphene) is the new wonder material that’s got everyone excited

#2

China made an artificial star that’s 6 times (6X) as hot as our sun … And it could be the future of energy

 

#3

Graphene Coating Could Help Prevent Lithium Battery Fires

 

Read/ Watch More …

Genesis Nanotech – Watch a Presentation Video on Our Current Project

Nano Enabled Batteries and Super Capacitors

Tenka Energy, Inc. Building Ultra-Thin Energy Dense SuperCaps and NexGen Nano-Enabled Pouch & Cylindrical Batteries – Energy Storage Made Small and POWERFUL!

 

 

 


MIT – 10 Breakthrough Technologies for 2019, Part II, with Guest Curator – Bill Gates



This is Part II of MIT’s ’10 Breakthrough Technologies for 2019′, re-posted from MIT Technology Review, with Guest Curator Bill Gates. You can read Part I here.

Part I Intro from Bill Gates: How We’ll Invent the Future

I was honored when MIT Technology Review invited me to be the first guest curator of its 10 Breakthrough Technologies. Narrowing down the list was difficult. I wanted to choose things that not only will create headlines in 2019 but also capture this moment in technological history—which got me thinking about how innovation has evolved over time.

 

Robot dexterity

NICOLAS ORTEGA

  • Why it matters If robots could learn to deal with the messiness of the real world, they could do many more tasks.
  • Key Players OpenAI
    Carnegie Mellon University
    University of Michigan
    UC Berkeley
  • Availability 3-5 years

Robots are teaching themselves to handle the physical world.

For all the talk about machines taking jobs, industrial robots are still clumsy and inflexible. A robot can repeatedly pick up a component on an assembly line with amazing precision and without ever getting bored—but move the object half an inch, or replace it with something slightly different, and the machine will fumble ineptly or paw at thin air.

But while a robot can’t yet be programmed to figure out how to grasp any object just by looking at it, as people do, it can now learn to manipulate the object on its own through virtual trial and error.

One such project is Dactyl, a robot that taught itself to flip a toy building block in its fingers. Dactyl, which comes from the San Francisco nonprofit OpenAI, consists of an off-the-shelf robot hand surrounded by an array of lights and cameras. Using what’s known as reinforcement learning, neural-network software learns how to grasp and turn the block within a simulated environment before the hand tries it out for real. The software experiments, randomly at first, strengthening connections within the network over time as it gets closer to its goal.

It usually isn’t possible to transfer that type of virtual practice to the real world, because things like friction or the varied properties of different materials are so difficult to simulate. The OpenAI team got around this by adding randomness to the virtual training, giving the robot a proxy for the messiness of reality.
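The domain-randomization idea is easy to sketch: each simulated training episode draws its physical parameters (friction, mass, sensor noise) from a range rather than fixing them, so a policy that succeeds across many slightly different virtual worlds is more likely to transfer to the messy real one. A minimal illustration of the sampling step (the parameter names and ranges here are invented for the sketch, not OpenAI's actual values):

```python
import random

def sample_randomized_world(rng):
    """Draw one simulated world with randomized physical parameters."""
    return {
        "friction": rng.uniform(0.5, 1.5),       # scale factor on nominal friction
        "block_mass": rng.uniform(0.03, 0.09),   # kg, varied around the real block
        "sensor_noise": rng.uniform(0.0, 0.02),  # std-dev of observation noise
    }

def training_worlds(episodes, seed=0):
    """One randomized world per training episode."""
    rng = random.Random(seed)
    return [sample_randomized_world(rng) for _ in range(episodes)]

worlds = training_worlds(1000)
frictions = [w["friction"] for w in worlds]
print(min(frictions), max(frictions))  # the policy must cope with this whole spread
```

The real system trains the reinforcement-learning policy inside each sampled world; the point of the sketch is only that no single simulated world is treated as ground truth.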

We’ll need further breakthroughs for robots to master the advanced dexterity needed in a real warehouse or factory. But if researchers can reliably employ this kind of learning, robots might eventually assemble our gadgets, load our dishwashers, and even help Grandma out of bed. —Will Knight

New-wave nuclear power

BOB MUMGAARD/PLASMA SCIENCE AND FUSION CENTER/MIT

Advanced fusion and fission reactors are edging closer to reality. 

New nuclear designs that have gained momentum in the past year are promising to make this power source safer and cheaper. Among them are generation IV fission reactors, an evolution of traditional designs; small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach. Developers of generation IV fission designs, such as Canada’s Terrestrial Energy and Washington-based TerraPower, have entered into R&D partnerships with utilities, aiming for grid supply (somewhat optimistically, maybe) by the 2020s.

Small modular reactors typically produce in the tens of megawatts of power (for comparison, a traditional nuclear reactor produces around 1,000 MW). Companies like Oregon’s NuScale say the miniaturized reactors can save money and reduce environmental and financial risks.

There has even been progress on fusion. Though no one expects delivery before 2030, companies like General Fusion and Commonwealth Fusion Systems, an MIT spinout, are making some headway. Many consider fusion a pipe dream, but because the reactors can’t melt down and don’t create long-lived, high-level waste, it should face much less public resistance than conventional nuclear. (Bill Gates is an investor in TerraPower and Commonwealth Fusion Systems.) —Leigh Phillips

NENOV | GETTY

Predicting preemies

  • Why it matters 15 million babies are born prematurely every year; it’s the leading cause of death for children under age five
  • Key player Akna Dx
  • Availability A test could be offered in doctor’s offices within five years

A simple blood test can predict if a pregnant woman is at risk of giving birth prematurely.

Our genetic material lives mostly inside our cells. But small amounts of “cell-free” DNA and RNA also float in our blood, often released by dying cells. In pregnant women, that cell-free material is an alphabet soup of nucleic acids from the fetus, the placenta, and the mother.

Stephen Quake, a bioengineer at Stanford, has found a way to use that to tackle one of medicine’s most intractable problems: the roughly one in 10 babies born prematurely.

Free-floating DNA and RNA can yield information that previously required invasive ways of grabbing cells, such as taking a biopsy of a tumor or puncturing a pregnant woman’s belly to perform an amniocentesis. What’s changed is that it’s now easier to detect and sequence the small amounts of cell-free genetic material in the blood. In the last few years researchers have begun developing blood tests for cancer (by spotting the telltale DNA from tumor cells) and for prenatal screening of conditions like Down syndrome.

The tests for these conditions rely on looking for genetic mutations in the DNA. RNA, on the other hand, is the molecule that regulates gene expression—how much of a protein is produced from a gene. By sequencing the free-floating RNA in the mother’s blood, Quake can spot fluctuations in the expression of seven genes that he singles out as associated with preterm birth. That lets him identify women likely to deliver too early. Once alerted, doctors can take measures to stave off an early birth and give the child a better chance of survival.

The technology behind the blood test, Quake says, is quick, easy, and less than $10 a measurement. He and his collaborators have launched a startup, Akna Dx, to commercialize it. —Bonnie Rochman

BRUCE PETERSON

Gut probe in a pill

  • Why it matters The device makes it easier to screen for and study gut diseases, including one that keeps millions of children in poor countries from growing properly

  • Key player Massachusetts General Hospital
  • Availability Now used in adults; testing in infants begins in 2019

A small, swallowable device captures detailed images of the gut without anesthesia, even in infants and children.

Environmental enteric dysfunction (EED) may be one of the costliest diseases you’ve never heard of. Marked by inflamed intestines that are leaky and absorb nutrients poorly, it’s widespread in poor countries and is one reason why many people there are malnourished, have developmental delays, and never reach a normal height. No one knows exactly what causes EED and how it could be prevented or treated.

Practical screening to detect it would help medical workers know when to intervene and how. Therapies are already available for infants, but diagnosing and studying illnesses in the guts of such young children often requires anesthetizing them and inserting a tube called an endoscope down the throat. It’s expensive, uncomfortable, and not practical in areas of the world where EED is prevalent.

So Guillermo Tearney, a pathologist and engineer at Massachusetts General Hospital (MGH) in Boston, is developing small devices that can be used to inspect the gut for signs of EED and even obtain tissue biopsies. Unlike endoscopes, they are simple to use at a primary care visit.

Tearney’s swallowable capsules contain miniature microscopes. They’re attached to a flexible string-like tether that provides power and light while sending images to a briefcase-like console with a monitor. This lets the health-care worker pause the capsule at points of interest and pull it out when finished, allowing it to be sterilized and reused. (Though it sounds gag-­inducing, Tearney’s team has developed a technique that they say doesn’t cause discomfort.) It can also carry technologies that image the entire surface of the digestive tract at the resolution of a single cell or capture three-dimensional cross sections a couple of millimeters deep.

The technology has several applications; at MGH it’s being used to screen for Barrett’s esophagus, a precursor of esophageal cancer. For EED, Tearney’s team has developed an even smaller version for use in infants who can’t swallow a pill. It’s been tested on adolescents in Pakistan, where EED is prevalent, and infant testing is planned for 2019.

The little probe will help researchers answer questions about EED’s development—such as which cells it affects and whether bacteria are involved—and evaluate interventions and potential treatments. —Courtney Humphries

PAPER BOAT CREATIVE | GETTY

Custom cancer vaccines

  • Why it matters Conventional chemotherapies take a heavy toll on healthy cells and aren’t always effective against tumors
  • Key players BioNTech
    Genentech
  • Availability In human testing

The treatment incites the body’s natural defenses to destroy only cancer cells by identifying mutations unique to each tumor.

Scientists are on the cusp of commercializing the first personalized cancer vaccine. If it works as hoped, the vaccine, which triggers a person’s immune system to identify a tumor by its unique mutations, could effectively shut down many types of cancers.

By using the body’s natural defenses to selectively destroy only tumor cells, the vaccine, unlike conventional chemotherapies, limits damage to healthy cells. The attacking immune cells could also be vigilant in spotting any stray cancer cells after the initial treatment.

The possibility of such vaccines began to take shape in 2008, five years after the Human Genome Project was completed, when geneticists published the first sequence of a cancerous tumor cell.

Soon after, investigators began to compare the DNA of tumor cells with that of healthy cells—and other tumor cells. These studies confirmed that all cancer cells contain hundreds if not thousands of specific mutations, most of which are unique to each tumor.

A few years later, a German startup called BioNTech provided compelling evidence that a vaccine containing copies of these mutations could catalyze the body’s immune system to produce T cells primed to seek out, attack, and destroy all cancer cells harboring them.

In December 2017, BioNTech began a large test of the vaccine in cancer patients, in collaboration with the biotech giant Genentech. The ongoing trial is targeting at least 10 solid cancers and aims to enroll upwards of 560 patients at sites around the globe.

The two companies are designing new manufacturing techniques to produce thousands of personally customized vaccines cheaply and quickly. That will be tricky because creating the vaccine involves performing a biopsy on the patient’s tumor, sequencing and analyzing its DNA, and rushing that information to the production site. Once produced, the vaccine needs to be promptly delivered to the hospital; delays could be deadly. —Adam Piore

BRUCE PETERSON/STYLING: MONICA MARIANO

The cow-free burger

  • Why it matters Livestock production causes catastrophic deforestation, water pollution, and greenhouse-gas emissions
  • Key players Beyond Meat
    Impossible Foods
  • Availability Plant-based now; lab-grown around 2020

Both lab-grown and plant-based alternatives approximate the taste and nutritional value of real meat without the environmental devastation.

The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.

By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.

Depending on the animal, producing a pound of meat protein with Western industrialized methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.
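Those multiplier ranges can be turned into a quick back-of-the-envelope comparison (the plant-protein baseline values below are arbitrary placeholders for the example; only the ratios come from the text):

```python
# Multipliers from the text: meat vs. plant protein, per pound, Western methods.
MULTIPLIERS = {"water": (4, 25), "land": (6, 17), "fossil_fuels": (6, 20)}

def meat_resource_range(plant_baseline):
    """Low/high resource use for a pound of meat protein, given the
    amount used per pound of plant protein for each resource."""
    return {
        resource: (lo * plant_baseline[resource], hi * plant_baseline[resource])
        for resource, (lo, hi) in MULTIPLIERS.items()
    }

# Illustrative baseline units per pound of plant protein (invented for the example):
baseline = {"water": 100, "land": 1, "fossil_fuels": 2}
print(meat_resource_range(baseline))
```

With a baseline of 100 units of water per pound of plant protein, a pound of meat protein lands anywhere from 400 to 2,500 units, which is why the ranges matter as much as the midpoints.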

The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.

Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste. Researchers at Maastricht University in the Netherlands, who are working to produce lab-grown meat at scale, believe they’ll have a lab-grown burger available by next year. One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.

The better environmental case can be made for plant-based meats from companies like Beyond Meat and Impossible Foods (Bill Gates is an investor in both companies), which use pea proteins, soy, wheat, potatoes, and plant oils to mimic the texture and taste of animal meat.

Beyond Meat has a new 26,000-square-foot (2,400-square-meter) plant in California and has already sold upwards of 25 million burgers from 30,000 stores and restaurants. According to an analysis by the Center for Sustainable Systems at the University of Michigan, a Beyond Meat patty would probably generate 90% less in greenhouse-gas emissions than a conventional burger made from a cow. —Markkus Rovito

 

NICO ORTEGA

Carbon dioxide catcher

  • Why it matters Removing CO2 from the atmosphere might be one of the last viable ways to stop catastrophic climate change
  • Key players Carbon Engineering
    Climeworks
    Global Thermostat
  • Availability 5-10 years

 

Practical and affordable ways to capture carbon dioxide from the air can soak up excess greenhouse-gas emissions.

Even if we slow carbon dioxide emissions, the warming effect of the greenhouse gas can persist for thousands of years. To prevent a dangerous rise in temperatures, the UN’s climate panel now concludes, the world will need to remove as much as 1 trillion tons of carbon dioxide from the atmosphere this century.

In a surprise finding last summer, Harvard climate scientist David Keith calculated that machines could, in theory, pull this off for less than $100 a ton, through an approach known as direct air capture. That’s an order of magnitude cheaper than earlier estimates that led many scientists to dismiss the technology as far too expensive—though it will still take years for costs to fall to anywhere near that level.

But once you capture the carbon, you still need to figure out what to do with it.

Carbon Engineering, the Canadian startup Keith cofounded in 2009, plans to expand its pilot plant to ramp up production of its synthetic fuels, using the captured carbon dioxide as a key ingredient. (Bill Gates is an investor in Carbon Engineering.)

Zurich-based Climeworks’s direct air capture plant in Italy will produce methane from captured carbon dioxide and hydrogen, while a second plant in Switzerland will sell carbon dioxide to the soft-drinks industry. So will Global Thermostat of New York, which finished constructing its first commercial plant in Alabama last year.

Still, if it’s used in synthetic fuels or sodas, the carbon dioxide will mostly end up back in the atmosphere. The ultimate goal is to lock greenhouse gases away forever. Some could be nested within products like carbon fiber, polymers, or concrete, but far more will simply need to be buried underground, a costly job that no business model seems likely to support.

In fact, pulling CO2 out of the air is, from an engineering perspective, one of the most difficult and expensive ways of dealing with climate change. But given how slowly we’re reducing emissions, there are no good options left. —James Temple

BRUCE PETERSON

An ECG on your wrist

Regulatory approval and technological advances are making it easier for people to continuously monitor their hearts with wearable devices.

Fitness trackers aren’t serious medical devices. An intense workout or loose band can mess with the sensors that read your pulse. But an electrocardiogram—the kind doctors use to diagnose abnormalities before they cause a stroke or heart attack— requires a visit to a clinic, and people often fail to take the test in time.

ECG-enabled smart watches, made possible by new regulations and innovations in hardware and software, offer the convenience of a wearable device with something closer to the precision of a medical one.

An Apple Watch–compatible band from Silicon Valley startup AliveCor that can detect atrial fibrillation, a frequent cause of blood clots and stroke, received clearance from the FDA in 2017. Last year, Apple released its own FDA-cleared ECG feature, embedded in the watch itself.

The health-device company Withings also announced plans for an ECG-equipped watch shortly after.

Current wearables still employ only a single sensor, whereas a real ECG has 12. And no wearable can yet detect a heart attack as it’s happening.

But this might change soon. Last fall, AliveCor presented preliminary results to the American Heart Association on an app and two-­sensor system that can detect a certain type of heart attack. —Karen Hao

THEDMAN | GETTY

Sanitation without sewers

  • Why it matters 2.3 billion people lack safe sanitation, and many die as a result
  • Key players Duke University
    University of South Florida
    Biomass Controls
    California Institute of Technology
  • Availability 1-2 years

 

Energy-efficient toilets can operate without a sewer system and treat waste on the spot.

About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.

Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well.

In 2011 Bill Gates created what was essentially the X Prize in this area—the Reinvent the Toilet Challenge. Since the contest’s launch, several teams have put prototypes in the field. All process the waste locally, so there’s no need for large amounts of water to carry it to a distant treatment plant.

Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil.

One drawback is that the toilets don’t work at every scale. The Biomass Controls product, for example, is designed primarily for tens of thousands of users per day, which makes it less well suited for smaller villages. Another system, developed at Duke University, is meant to be used only by a few nearby homes.

So the challenge now is to make these toilets cheaper and more adaptable to communities of different sizes. “It’s great to build one or two units,” says Daniel Yeh, an associate professor at the University of South Florida, who led the NEWgenerator team. “But to really have the technology impact the world, the only way to do that is mass-produce the units.” —Erin Winick

BRUCE PETERSON

Smooth-talking AI assistants

  • Why it matters AI assistants can now perform conversation-based tasks like booking a restaurant reservation or coordinating a package drop-off rather than just obey simple commands
  • Key players Google
    Alibaba
    Amazon
  • Availability 1-2 years

 

New techniques that capture semantic relationships between words are making machines better at understanding natural language.

We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their alleged smarts. They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations.

But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.

These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.

Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.

In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.

But while AI programs have gotten better at figuring out what you want, they still can’t understand a sentence. Lines are scripted or generated statistically, reflecting how hard it is to imbue machines with true language understanding. Once we cross that hurdle, we’ll see yet another evolution, perhaps from logistics coordinator to babysitter, teacher—or even friend? —Karen Hao

MIT’s 10 Breakthrough Technologies for 2019 – Introduction by Bill Gates: Part I


In this two-part re-post of MIT Technology Review’s 10 Breakthrough Technologies for 2019, Guest Curator Bill Gates was asked to choose this year’s list of inventions that will change the world for the better.

Part I: Bill Gates: How we’ll Invent the Future

I was honored when MIT Technology Review invited me to be the first guest curator of its 10 Breakthrough Technologies. Narrowing down the list was difficult. I wanted to choose things that not only will create headlines in 2019 but also capture this moment in technological history—which got me thinking about how innovation has evolved over time.

My mind went to—of all things—the plow. Plows are an excellent embodiment of the history of innovation. Humans have been using them since 4000 BCE, when Mesopotamian farmers aerated soil with sharpened sticks. We’ve been slowly tinkering with and improving them ever since, and today’s plows are technological marvels.

 

But what exactly is the purpose of a plow? It’s a tool that creates more: more seeds planted, more crops harvested, more food to go around. In places where nutrition is hard to come by, it’s no exaggeration to say that a plow gives people more years of life. The plow—like many technologies, both ancient and modern—is about creating more of something and doing it more efficiently, so that more people can benefit.

Contrast that with lab-grown meat, one of the innovations I picked for this year’s 10 Breakthrough Technologies list. Growing animal protein in a lab isn’t about feeding more people. There’s enough livestock to feed the world already, even as demand for meat goes up. Next-generation protein isn’t about creating more—it’s about making meat better. It lets us provide for a growing and wealthier world without contributing to deforestation or emitting methane. It also allows us to enjoy hamburgers without killing any animals.

Put another way, the plow improves our quantity of life, and lab-grown meat improves our quality of life. For most of human history, we’ve put most of our innovative capacity into the former. And our efforts have paid off: worldwide life expectancy rose from 34 years in 1913 to 60 in 1973 and has reached 71 today.

Because we’re living longer, our focus is starting to shift toward well-being. This transformation is happening slowly. If you divide scientific breakthroughs into these two categories—things that improve quantity of life and things that improve quality of life—the 2009 list looks not so different from this year’s. Like most forms of progress, the change is so gradual that it’s hard to perceive. It’s a matter of decades, not years—and I believe we’re only at the midpoint of the transition.

To be clear, I don’t think humanity will stop trying to extend life spans anytime soon. We’re still far from a world where everyone everywhere lives to old age in perfect health, and it’s going to take a lot of innovation to get us there. Plus, “quantity of life” and “quality of life” are not mutually exclusive. A malaria vaccine would both save lives and make life better for children who might otherwise have been left with developmental delays from the disease.

We’ve reached a point where we’re tackling both ideas at once, and that’s what makes this moment in history so interesting. If I had to predict what this list will look like a few years from now, I’d bet technologies that alleviate chronic disease will be a big theme. This won’t just include new drugs (although I would love to see new treatments for diseases like Alzheimer’s on the list). The innovations might look like a mechanical glove that helps a person with arthritis maintain flexibility, or an app that connects people experiencing major depressive episodes with the help they need.

If we could look even further out—let’s say the list 20 years from now—I would hope to see technologies that center almost entirely on well-being. I think the brilliant minds of the future will focus on more metaphysical questions: How do we make people happier? How do we create meaningful connections? How do we help everyone live a fulfilling life?

I would love to see these questions shape the 2039 list, because it would mean that we’ve successfully fought back disease (and dealt with climate change). I can’t imagine a greater sign of progress than that. For now, though, the innovations driving change are a mix of things that extend life and things that make it better. My picks reflect both. Each one gives me a different reason to be optimistic for the future, and I hope they inspire you, too.

My selections include amazing new tools that will one day save lives, from simple blood tests that predict premature birth to toilets that destroy deadly pathogens. I’m equally excited by how other technologies on the list will improve our lives. Wearable health monitors like the wrist-based ECG will warn heart patients of impending problems, while others let diabetics not only track glucose levels but manage their disease. Advanced nuclear reactors could provide carbon-free, safe, secure energy to the world.

One of my choices even offers us a peek at a future where society’s primary goal is personal fulfillment. Among many other applications, AI-driven personal agents might one day make your e-mail in-box more manageable—something that sounds trivial until you consider what possibilities open up when you have more free time.

The 30 minutes you used to spend reading e-mail could be spent doing other things. I know some people would use that time to get more work done—but I hope most would use it for pursuits like connecting with a friend over coffee, helping your child with homework, or even volunteering in your community.

That, I think, is a future worth working toward.


 

You can read Part II Here

If Solar And Wind Are So Cheap, Why Are They Making Electricity So Expensive?



Over the last year, the media have published story after story after story about the declining price of solar panels and wind turbines.

People who read these stories are understandably left with the impression that the more solar and wind energy we produce, the lower electricity prices will become.

And yet that’s not what’s happening. In fact, it’s the opposite.

Between 2009 and 2017, the price of solar panels per watt declined by 75 percent while the price of wind turbines per watt declined by 50 percent.

And yet — during the same period — the price of electricity in places that deployed significant quantities of renewables increased dramatically.

Electricity prices increased by 51 percent in Germany during its expansion of solar and wind energy, and by 24 percent in California during its solar build-out from 2011 to 2017.

What gives? If solar panels and wind turbines became so much cheaper, why did the price of electricity rise instead of decline?

Electricity prices increased by 51 percent in Germany during its expansion of solar and wind energy. EP

One hypothesis might be that while electricity from solar and wind became cheaper, other energy sources like coal, nuclear, and natural gas became more expensive, eliminating any savings, and raising the overall price of electricity.

But, again, that’s not what happened.

The price of natural gas declined by 72 percent in the U.S. between 2009 and 2016 due to the fracking revolution. In Europe, natural gas prices dropped by a little less than half over the same period.

The price of nuclear and coal in those places during the same period was mostly flat.

Electricity prices increased 24 percent in California during its solar energy build-out from 2011 to 2017. EP

Another hypothesis might be that the closure of nuclear plants resulted in higher energy prices.

Evidence for this hypothesis comes from the fact that nuclear energy leaders Illinois, France, Sweden and South Korea enjoy some of the cheapest electricity in the world.

Since 2010, California closed one nuclear plant (2,140 MW installed capacity) while Germany closed 5 nuclear plants and 4 other reactors at currently-operating plants (10,980 MW in total).

Electricity in Illinois is 42 percent cheaper than electricity in California while electricity in France is 45 percent cheaper than electricity in Germany.

But this hypothesis is undermined by the fact that the price of the main replacement fuels, natural gas and coal, remained low, despite increased demand for those two fuels in California and Germany.

That leaves us with solar and wind as the key suspects behind higher electricity prices. But why would cheaper solar panels and wind turbines make electricity more expensive?

The main reason appears to have been predicted by a young German economist in 2013.

In a paper for Energy Policy, Lion Hirth estimated that the economic value of wind and solar would decline significantly as they became a larger part of the electricity supply.

The reason? Their fundamentally unreliable nature. Both solar and wind produce too much energy when societies don’t need it, and not enough when they do.

Solar and wind thus require that natural gas plants, hydro-electric dams, batteries or some other form of reliable power be ready at a moment’s notice to start churning out electricity when the wind stops blowing and the sun stops shining.

And unreliability requires solar- and/or wind-heavy places like Germany, California and Denmark to pay neighboring nations or states to take their solar and wind energy when they are producing too much of it.

Hirth predicted that the economic value of wind on the European grid would decline 40 percent once it becomes 30 percent of electricity while the value of solar would drop by 50 percent when it got to just 15 percent.


Hirth predicted that the economic value of wind would decline 40% once it reached 30% of electricity, and that the value of solar would drop by 50% when it reached 15% of electricity. EP

In 2017, the share of electricity coming from wind and solar was 53 percent in Denmark, 26 percent in Germany, and 23 percent in California. Denmark and Germany have the first and second most expensive electricity in Europe.
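Hirth's predicted relationship can be illustrated with a toy linear model. The slopes below are back-calculated from the two figures quoted above; they illustrate the shape of the effect, not Hirth's actual estimates:

```python
def value_factor(share, slope):
    """Toy linear model: market value of a variable renewable relative
    to the average electricity price, declining with its share of supply."""
    return 1.0 - slope * share

# Slopes back-calculated from the two figures quoted above:
# wind loses 40% of its value at a 30% share; solar loses 50% at 15%.
WIND_SLOPE = 0.40 / 0.30
SOLAR_SLOPE = 0.50 / 0.15

print(round(value_factor(0.30, WIND_SLOPE), 2))   # 0.6 of full value
print(round(value_factor(0.15, SOLAR_SLOPE), 2))  # 0.5 of full value
```

The steeper slope for solar reflects its more concentrated production profile: all solar plants in a region produce at the same midday hours, so each additional panel competes with every other one.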

By reporting on the declining costs of solar panels and wind turbines but not on how they increase electricity prices, journalists are — intentionally or unintentionally — misleading policymakers and the public about those two technologies.

The Los Angeles Times last year reported that California’s electricity prices were rising, but failed to connect the price rise to renewables, provoking a sharp rebuttal from UC Berkeley economist James Bushnell.

“The story of how California’s electric system got to its current state is a long and gory one,” Bushnell wrote, but “the dominant policy driver in the electricity sector has unquestionably been a focus on developing renewable sources of electricity generation.”


 

Part of the problem is that many reporters don’t understand electricity. They think of electricity as a commodity when it is, in fact, a service — like eating at a restaurant.

The price we pay for the luxury of eating out isn’t just the cost of the ingredients, most of which, like solar panels and wind turbines, have declined in price for decades.

Rather, the price of services like eating out and electricity reflects not only the cost of a few ingredients but also the cost of their preparation and delivery.
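The restaurant analogy translates directly into a cost breakdown. The dollar figures below are invented for illustration; only the categories (generation versus backup, transmission and distribution) correspond to real components of a delivered-electricity price:

```python
# Illustrative-only breakdown of a delivered-electricity price, in $/MWh.
# Generation (the "ingredients") is a minority share; the "preparation
# and delivery" components dominate, so cheaper panels alone cannot
# make the delivered total cheap.
cost_components = {
    "generation": 30.0,        # the "ingredients"
    "backup_capacity": 15.0,   # reliable plants kept on standby
    "transmission": 25.0,      # long-distance delivery
    "distribution": 30.0,      # local delivery
}

total = sum(cost_components.values())
generation_share = cost_components["generation"] / total
print(f"total: ${total:.0f}/MWh, generation share: {generation_share:.0%}")
```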

This is a problem of bias, not just energy illiteracy. Normally skeptical journalists routinely give renewables a pass.

The reason isn’t that they don’t know how to report critically on energy — they do so regularly when it comes to non-renewable energy sources — but that they don’t want to.

That could — and should — change. Reporters have an obligation to report accurately and fairly on all issues they cover, especially ones as important as energy and the environment.

A good start would be for them to investigate why, if solar and wind are so cheap, they are making electricity so expensive.

Article Re-Posted from Forbes, by Michael Shellenberger

The Fourth Industrial Revolution: Leveraging Nanotechnology Applications In Manufacturing



 

The Fourth Industrial Revolution has brought big strides in manufacturing, especially with the additions of robotics and 3D printing. But one field has been advancing the notion of thinking small. Nanotechnology, or the study and application of manipulating matter at the nanoscale, has uncovered the existence of a world that’s a thousand times smaller than a fly’s eye. It has also led to the development of materials and techniques that have enhanced production capabilities.

Nanotechnology continues to have a broad impact on different sectors. In fact, the worldwide market will likely exceed $125 billion by 2024. Ranging from stain-resistant fabric to more affordable solar cells, nanotechnology applications have been improving our daily lives. As research continues, advances in this space are opening up possibilities for more promising innovations.

A Closer Look at the Nanoscale

In the metric system, “nano” means a factor of one billionth—which means that a nanometer (nm) is one-billionth of a meter. Forms of matter at the nanoscale usually have lengths from the atomic scale of around 0.1 nm up to 100 nm.
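In code, the conversion is a one-liner (a quick arithmetic check, nothing more):

```python
# "Nano" means one-billionth: 1 nm = 1e-9 m.
NM_PER_M = 1_000_000_000

one_nm_in_m = 1 / NM_PER_M
print(one_nm_in_m)  # 1e-09

# The nanoscale spans roughly 0.1 nm (atomic scale) up to 100 nm.
lower_m = 0.1 / NM_PER_M
upper_m = 100 / NM_PER_M
print(lower_m, upper_m)
```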

What makes the nanoscale extraordinary is that the properties and characteristics of matter are different on this level. Some materials can become more efficient at conducting electricity or heat. Others reflect light better. There are also materials that become stronger. The list goes on. For instance, the metal copper on the nanoscale is transparent. Gold, which is normally unreactive, becomes chemically active. Carbon, which is soft in its usual form, becomes incredibly hard when packed into a nanoscopic arrangement called a “nanotube”. These characteristics are crucial for numerous nanotechnology applications.


Dr. K. Eric Drexler weighs in on the uses of nanotechnology and on understanding where nanotechnology will lead.

The reason chemical properties alter at the nanoscale is that it’s easier for particles to move around and between one another. Additionally, gravity becomes much less important than the electromagnetic forces between atoms and molecules. Thermal vibrations also become extremely significant. In short, the rules of science are very different at the nanoscale. It’s one of the factors that make nanotechnology research and nanotechnology applications so fascinating.

Creating lighter, sturdier and safer materials is possible with nanotechnology. Many of those materials can also withstand great pressures and weights. Nanomaterials, or structures at the nanoscale, enable the advanced manufacturing of innovative, next-generation products that provide higher performance at lower cost and improved sustainability.

Exploring the Nanotech Space, One Atom at a Time

A few well-known companies have been exploring the substantial profit potential of nanotechnology applications.

IBM has invested more than $3 billion for the development of semiconductors that will be seven nanometers or less. The company has also been exploring new nanomanufacturing techniques. Additionally, IBM holds the distinction of producing the world’s smallest and fastest graphene chip.

Uses of nanotechnology in relation to metal-organic frameworks (MOFs) have cost-advantage production economics.

Samsung has also been active in nanotechnology research. The electronics giant has filed more than 400 patents related to graphene. Such patents involve manufacturing processes and touch screens, among other nanotechnology applications. Moreover, Samsung has funded an effort to develop its first generation graphene batteries.


One of the notable startups that has been gaining traction in this space is NuMat Technologies. The company creates intelligently engineered systems through the integration of programmable nanomaterials. NuMat is also the first company in the world to commercialize products enabled by metal-organic frameworks (MOFs). These are nanomaterials with vast surface areas, highly tunable porosities, and near-infinite combinatorial possibilities. Nanotechnology applications of MOFs involve products with improved performance and otherwise-unachievable flexibility in form factors. Additionally, they have cost-advantage production economics.

Founder Benjamin Hernandez believes that one of the most important uses of nanotechnology is solving challenges related to sustainability.

“I think conceptually that’s kind of the wave of the future, using atomic-scale machines or engineering to solve complex macro problems,” Hernandez said.

Moreover, NuMat uses artificial intelligence to design MOFs. The company has total funding of $22.3 million so far. NuMat continues extensive research to develop more nanotechnology applications for the future.

For something so small, it’s understandable that few fully grasp the uses of nanotechnology.

Making a Difference with Nanotechnology Research

The ones mentioned above are just a few of the thousands of uses of nanotechnology. Achievements in the field seem to be announced almost daily. However, businesses must also place greater importance on using nanotechnology for more sustainable manufacturing. After all, its advantages include reduced consumption of raw materials and the substitution of more abundant or less toxic materials for the ones presently used. Moreover, nanotechnology applications can lead to the development of cleaner and less wasteful manufacturing processes.

Professor Sijie Lin at Tongji University is optimistic about the prevalence of sustainability in nanotechnology applications.

“Designing safer nanomaterials and nanostructures has gained increasing attention in the field of nanoscience and technology in recent years,” Lin said. “Based on the body of experimental evidence contributed by environmental health and safety studies, materials scientists now have a better grasp on the relationships between the nanomaterials’ physicochemical characteristics and their hazard and safety profiles.”

According to Markus Antonietti, director of the Max Planck Institute of Colloids and Interfaces, more work needs to be done to raise public awareness of nanotechnology applications. “But there also needs to be a focus on education and getting information to the public at large,” he noted. “The best part is that all of this could happen immediately if we simply spread the information in an understandable way. People don’t read science journals, so they don’t even know that all of this is possible.”

Article Re-Posted from Bold Business

For more on Bold Business’ examination of the Fourth Industrial Revolution, check out these stories on 3D Printing and Supply-Chain Automation.

Cornell University: Pore size influences nature of complex nanostructures – Materials for energy storage, biochemical sensors and electronics


The mere presence of void or empty spaces in porous two-dimensional molecules and materials leads to markedly different van der Waals interactions across a range of distances. Credit: Yan Yang and Robert DiStasio

Building at the nanoscale is not like building a house. Scientists often start with two-dimensional molecular layers and combine them to form complex three-dimensional architectures.

And instead of nails and screws, these structures are joined together by the attractive van der Waals forces that exist between objects at the nanoscale.

Van der Waals forces are critical in constructing materials for energy storage, biochemical sensors and electronics, although they are weak compared to chemical bonds. They also play a crucial role in biology, determining which drugs bind to the active sites in proteins.

In new research that could help inform development of new materials, Cornell chemists have found that the empty space (“pores”) present in two-dimensional molecular building blocks fundamentally changes the strength of these van der Waals forces, and can potentially alter the assembly of sophisticated nanostructures.

The findings represent an unexplored avenue toward governing the self-assembly of complex nanostructures from porous two-dimensional building blocks.

“We hope that a more complete understanding of these forces will aid in the discovery and development of novel materials with diverse functionalities, targeted properties, and potentially novel applications,” said Robert A. DiStasio Jr., assistant professor of chemistry in the College of Arts and Sciences.

In a paper titled “Influence of Pore Size on the van der Waals Interaction in Two-Dimensional Molecules and Materials,” published Jan. 14 in Physical Review Letters, DiStasio, graduate student Yan Yang and postdoctoral associate Ka Un Lao describe a series of mathematical models that address the question of how void space fundamentally affects the attractive physical forces which occur over nanoscale distances.

In three prototypical model systems, the researchers found that particular pore sizes lead to unexpected behavior in the power laws that govern van der Waals forces.

Further, they write, this behavior “can be tuned by varying the relative size and shape of these void spaces … [providing] new insight into the self-assembly and design of complex nanostructures.”

While strong covalent bonds are responsible for the formation of two-dimensional molecular layers, van der Waals interactions provide the main attractive forces between the layers. As such, van der Waals forces are largely responsible for the self-assembly of the complex three-dimensional nanostructures that make up many of the advanced materials in use today.
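For orientation, the textbook pairwise dispersion interaction falls off as the sixth power of distance; a minimal sketch is below. The Cornell result is precisely that pores modify such effective power laws, which this naive pairwise form does not capture:

```python
def vdw_pair_energy(r_nm, c6=1.0):
    """Textbook pairwise London dispersion energy, E = -C6 / r^6
    (arbitrary units). Real layered and porous materials deviate from
    this simple power law -- that deviation is what the Cornell work probes."""
    return -c6 / r_nm**6

# Doubling the separation weakens the pairwise attraction 64-fold:
ratio = vdw_pair_energy(1.0) / vdw_pair_energy(2.0)
print(ratio)  # 64.0
```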

The researchers demonstrated their findings with numerous two-dimensional systems, including covalent organic frameworks, which are endowed with adjustable and potentially very large pores.

“I am surprised that the complicated relationship between void space and van der Waals forces could be rationalized through such simple models,” said Yang. “In the same breath, I am really excited about our findings, as even  in the van der Waals forces can markedly impact the properties of molecules and materials.”


More information: Yan Yang et al., “Influence of Pore Size on the van der Waals Interaction in Two-Dimensional Molecules and Materials,” Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.122.026001

The Future Of Energy Isn’t Fossil Fuels Or Renewables, It’s Nuclear Fusion (Really?)


 

Colorado State University scientists, using a compact but powerful laser to heat arrays of ordered nanowires, have demonstrated micro-scale nuclear fusion in the lab.

Let’s pretend, for a moment, that the climate doesn’t matter. That we’re completely ignoring the connection between carbon dioxide, the Earth’s atmosphere, the greenhouse effect, global temperatures, ocean acidification, and sea-level rise. From a long-term point of view, we’d still need to plan for our energy future. Fossil fuels, which make up by far the majority of world-wide power today, are an abundant but fundamentally limited resource. Renewable sources like wind, solar, and hydroelectric power have different limitations: they’re inconsistent. There is a long-term solution, though, that overcomes all of these problems: nuclear fusion.


Even the most advanced chemical reactions, like combusting thermite, shown here, generate about a million times less energy per unit mass compared to a nuclear reaction. NIKTHESTUNNED OF WIKIPEDIA
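The caption's "about a million times" figure is easy to sanity-check at the order-of-magnitude level: chemical reactions shuffle electrons at eV energies, while nuclear reactions rearrange nuclei at MeV energies (generic textbook scales, not numbers from this article):

```python
# Typical energy released per reacting particle, in electron-volts.
EV_PER_CHEMICAL_BOND = 4.0       # chemical reactions: ~eV scale
EV_PER_NUCLEAR_REACTION = 4.0e6  # nuclear reactions: ~MeV scale

ratio = EV_PER_NUCLEAR_REACTION / EV_PER_CHEMICAL_BOND
print(ratio)  # 1000000.0 -> "about a million times"
```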

It might seem that the fossil fuel problem is obvious: we cannot simply generate more coal, oil, or natural gas when our present supplies run out. We’ve been burning pretty much every drop we can get our hands on for going on three centuries now, and this problem is going to get worse. Even though we have hundreds of years more before we’re all out, the amount isn’t limitless. There are legitimate, non-warming-related environmental concerns, too.


Even if we ignored the CO2-global climate change problem, fossil fuels are limited in the amount Earth contains, and also extracting, transporting, refining and burning them causes large amounts of pollution. GREG GOEBEL

The burning of fossil fuels generates pollution, since these carbon-based fuel sources contain a lot more than just carbon and hydrogen in their chemical makeup, and burning them (to generate energy) also burns all the impurities, releasing them into the air. In addition, the refining and/or extraction process is dirty, dangerous and can pollute the water table and entire bodies of water, like rivers and lakes.


Wind farms, like many other sources of renewable energy, are dependent on the environment in an inconsistent, uncontrollable way. WINCHELL JOSHUA, U.S. FISH AND WILDLIFE SERVICE

On the other hand, renewable energy sources are inconsistent, even at their best. Try powering your grid through overcast days, nights, or droughts, and you’re doomed to failure. The sheer magnitude of the battery storage capabilities required to power even a single city during insufficient energy-generation conditions is daunting. Simultaneously, the pollution associated with creating solar panels, manufacturing wind or hydroelectric turbines, and (especially) with creating the materials needed to store large amounts of energy is tremendous as well. Even what’s touted as “green energy” isn’t devoid of drawbacks.
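A back-of-the-envelope estimate shows the scale involved (every input below is a round, illustrative number, not data from the article):

```python
# Back-of-the-envelope: energy needed to ride a city through one
# windless, sunless night on batteries alone.
CITY_AVG_DEMAND_MW = 1_000   # a mid-sized city, ~1 GW average load
HOURS_TO_COVER = 12          # one overnight period
PACK_KWH = 75                # one typical EV-sized battery pack

storage_needed_mwh = CITY_AVG_DEMAND_MW * HOURS_TO_COVER
equivalent_ev_packs = storage_needed_mwh * 1_000 / PACK_KWH
print(storage_needed_mwh, int(equivalent_ev_packs))  # 12,000 MWh ~ 160,000 EV packs
```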


The RA-6 (República Argentina 6) experimental nuclear reactor in operation. The blue glow is known as Cherenkov radiation, emitted by particles moving faster than light travels in water. CENTRO ATOMICO BARILOCHE, VIA PIECK DARÍO

But there is always the nuclear option. That word itself is enough to elicit strong reactions from many people: nuclear. The idea of nuclear bombs, of radioactive fallout, of meltdowns, and of disasters like Chernobyl, Three Mile Island, and Fukushima — not to mention residual fear from the Cold War — make “NIMBY” the default position for a large number of people. And that’s a fear that’s not wholly without foundation, when it comes to nuclear fission. But fission isn’t the only game in town.

Watch the Video: Nuclear Bomb – The First H Bomb Test

 

In 1952, the United States detonated Ivy Mike, the first demonstrated nuclear fusion reaction on Earth. Whereas nuclear fission involves taking heavy, unstable (and already radioactive) elements like thorium, uranium or plutonium and initiating a reaction that splits them into smaller, also-radioactive components that release energy, fusion need involve no radioactive material at all. The reactants are light, stable elements like isotopes of hydrogen, helium or lithium; the products are also light and stable, like helium, lithium, beryllium or boron.

 


The proton-proton chain responsible for producing the vast majority of the Sun’s power is an example of nuclear fusion. BORB / WIKIMEDIA COMMONS

So far, fission has been achieved in both runaway and controlled environments, rushing past the breakeven point (where the energy output is greater than the input) with ease, while fusion has never reached the breakeven point in a controlled setting. But four main possibilities have emerged.

  1. Inertial Confinement Fusion. We take a pellet of hydrogen — the fuel for this fusion reaction — and compress it using many lasers that surround the pellet. The compression causes the hydrogen nuclei to fuse into heavier elements like helium, and releases a burst of energy.
  2. Magnetic Confinement Fusion. Instead of using mechanical compression, why not let the electromagnetic force do the confining work? Magnetic fields confine a superheated plasma of fusible material, and nuclear fusion reactions occur inside a Tokamak-style reactor.
  3. Magnetized Target Fusion. In MTF, a superheated plasma is created and confined magnetically, but pistons surrounding it compress the fuel inside, creating a burst of nuclear fusion in the interior.
  4. Subcritical Fusion. Instead of trying to trigger fusion with heat or inertia, subcritical fusion uses a subcritical fission reaction — with zero chance of a meltdown — to power a fusion reaction.

The first two have been researched for decades now, and are the closest to the coveted breakeven point. But the latter two are new, with the last one gaining many new investors and start-ups this decade.
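The breakeven point mentioned above has a simple definition: the gain Q, fusion power out divided by heating power in, must exceed one. A minimal sketch:

```python
def fusion_gain(power_out_mw, power_in_mw):
    """Q factor: fusion power produced divided by heating power supplied.
    Q > 1 is scientific breakeven, which controlled fusion experiments
    have so far not exceeded."""
    return power_out_mw / power_in_mw

print(fusion_gain(0.3, 1.0))   # 0.3: below breakeven
print(fusion_gain(10.0, 5.0))  # 2.0: net energy gain
```

A practical power plant needs Q well above 1, since converting fusion heat back to electricity and running the plant itself both consume part of the output.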


The preamplifiers of the National Ignition Facility are the first step in increasing the energy of laser beams as they make their way toward the target chamber. NIF recently achieved a 500 terawatt shot – 1,000 times more power than the United States uses at any instant in time. DAMIEN JEMISON/LLNL

Even if you reject climate science, the problem of powering the world, and doing so in a sustainable, pollution-free way, is one of the most daunting long-term ones facing humanity. Nuclear fusion as a power source has never been given the necessary funding to develop it to fruition, but it’s the one physically possible solution to our energy needs with no obvious downsides. If we can get the idea that “nuclear” means “potential for disaster” out of our heads, people from all across the political spectrum just might be able to come together and solve our energy and environmental needs in one single blow. If you think the government should be investing in science with national and global payoffs, you can’t do better than the ROI that would come from successful fusion research. The physics works out beautifully; we now just need the investment and the engineering breakthroughs.

Special Contribution to Forbes by Ethan Siegel

How a ‘solar battery’ could bring electricity to rural areas – A ‘solar flow’ battery could “Harvest (energy) in the Daytime and Provide Electricity in the Evening”


New solar flow battery with a 14.1 percent efficiency. Photo: David Tenenbaum, UW-Madison

Solar energy is becoming more and more popular as prices drop, yet a home powered by the Sun isn’t free from the grid because solar panels don’t store energy for later. Now, researchers have refined a device that can both harvest and store solar energy, and they hope it will one day bring electricity to rural and underdeveloped areas.

The problem of energy storage has led to many creative solutions, like giant batteries. For a paper published today in the journal Chem, scientists trying to improve the solar cells themselves developed an integrated battery that works in three different ways.

It can work like a normal solar cell by converting sunlight to electricity immediately, explains study author Song Jin, a chemist at the University of Wisconsin at Madison. It can store the solar energy, or it can simply be charged like a normal battery.

“IT COULD HARVEST IN THE DAYTIME, PROVIDE ELECTRICITY IN THE EVENING.”

It’s a combination of two existing technologies: solar cells that harvest light, and a so-called flow battery.

The most commonly used batteries, lithium-ion, store energy in solid materials, like various metals. Flow batteries, on the other hand, store energy in external liquid tanks.

What is A ‘Flow Battery’

This means they are very easy to scale for large projects. Scaling up all the components of a lithium-ion battery might throw off the engineering, but for flow batteries, “you just make the tank bigger,” says Timothy Cook, a University at Buffalo chemist and flow battery expert not involved in the study.

“You really simplify how to make the battery grow in capacity,” he adds. “We’re not making flow batteries to power a cell phone; we’re thinking about buildings or industrial sites.”

Jin and his team were the first to combine the two features. They have been working on the battery for years, and have now reached 14.1 percent efficiency.

Jin calls this “round-trip efficiency” — as in, the efficiency of taking in energy, storing it, and discharging it. “We can probably get to 20 percent efficiency in the next few years, and I think 25 percent round-trip is not out of the question,” Jin says.
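Round-trip efficiency is just the product of each stage's efficiency. The split below into a harvest-and-charge stage and a discharge stage is hypothetical, chosen only so the product matches the 14.1 percent figure:

```python
def round_trip_efficiency(*stage_efficiencies):
    """Overall efficiency of storing then retrieving energy: the
    product of each stage's efficiency (fractions between 0 and 1)."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

# Hypothetical split: a 20% harvest-and-charge stage and a 70.5%
# discharge stage compose to the reported 14.1% round-trip figure.
print(round(round_trip_efficiency(0.20, 0.705), 3))  # 0.141
```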

Apart from improving efficiency, Jin and his team want to develop a better design that can use cheaper materials.

The invention is still at proof-of-concept stage, but he thinks it could have a large impact in less-developed areas without power grids and proper infrastructure. “There, you could have a medium-scale device like this operate by itself,” he says. “It could harvest in the daytime, provide electricity in the evening.” In many areas, Jin adds, having electricity is a game changer, because it can help people be more connected or enable more clinics to be open and therefore improve health care.

And Cook notes that if the solar flow battery can be scaled, it can still be helpful in the US.

The United States might have plenty of power infrastructure, but with such a device, “you can disconnect and have personalized energy where you’re storing and using what you need locally,” he says. And that could help us be less dependent on forms of energy that harm the environment.

Researchers Develop Novel Two-Step CO2 Conversion Technology – Could aid in the production of valuable chemicals and fuels


UD Professor Feng Jiao’s team constructed an electrolyser, pictured here, to conduct their novel two-step conversion process.

 

A team of researchers at the University of Delaware’s Center for Catalytic Science and Technology (CCST) has discovered a novel two-step process to increase the efficiency of carbon dioxide (CO2) electrolysis, a chemical reaction driven by electrical currents that can aid in the production of valuable chemicals and fuels.

The results of the team’s study were published Monday, Aug. 20 in Nature Catalysis.

The research team, consisting of Feng Jiao, associate professor of chemical and biomolecular engineering, and graduate students Matthew Jouny and Wesley Luc, obtained their results by constructing a specialized three-chambered device called an electrolyser, which uses electricity to reduce CO2 into smaller molecules.

Compared to fossil fuels, electricity is a much more affordable and environmentally-friendly method for driving chemical processes to produce commercial chemicals and fuels. These can include ethylene, which is used in the production of plastics, and ethanol, a valuable fuel additive.

“This novel electrolysis technology provides a new route to achieve higher selectivities at incredible reaction rates, which is a major step towards commercial applications,” said Jiao, who also serves as associate director of CCST.

Whereas direct CO2 electrolysis is the standard method for reducing carbon dioxide, Jiao’s team broke the electrolysis process into two steps, reducing CO2 into carbon monoxide (CO) and then reducing the CO further into multi-carbon (C2+) products. This two-part approach, said Jiao, presents multiple advantages over the standard method.

“By breaking the process into two steps, we’ve obtained a much higher selectivity towards multi-carbon products than in direct electrolysis,” Jiao said. “The sequential reaction strategy could open up new ways to design more efficient processes for CO2 utilization.”
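The logic of the two-step route can be sketched numerically. All selectivities below are hypothetical placeholders, not the paper's measured values; the point is only that a sequence of two individually efficient steps can beat a single low-selectivity step:

```python
# Hypothetical selectivities toward the desired C2+ products
# (fractions, illustrative only -- not from the Nature Catalysis paper).
direct_selectivity = 0.45   # one-step CO2 -> C2+
step1_selectivity = 0.95    # step 1: CO2 -> CO
step2_selectivity = 0.70    # step 2: CO -> C2+

# Sequential steps multiply: overall selectivity of the two-step route.
two_step_selectivity = step1_selectivity * step2_selectivity
print(round(two_step_selectivity, 3))             # 0.665
print(two_step_selectivity > direct_selectivity)  # True
```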

Electrolysis is also driving Jiao’s research with colleague Bingjun Xu, assistant professor of chemical and biomolecular engineering. In collaboration with researchers at Tianjin University in China, Jiao and Xu are designing a system that could reduce greenhouse gas emissions by using carbon-neutral solar electricity.

“We hope this work will bring more attention to this promising technology for further research and development,” Jiao said. “There are many technical challenges still to be solved, but we are working on them!”

Energy Storage Technologies vie for Investment and Market Share – “And the Winners Are” …


One of the conveniences that makes fossil fuels hard to phase out is the relative ease of storing them, a challenge that many of the talks at Advanced Energy Materials 2018 tackled as they laid out advances in alternatives for energy storage.

Max Lu during the inaugural address at AEM 2018

“Energy is the biggest business in the world,” Max Lu, president and vice-chancellor of the University of Surrey, told attendees of Advanced Energy Materials 2018 at Surrey University earlier this month.

But as Lu, who has held numerous positions on senior academic boards and government councils, pointed out, the sheer scale of the business means it takes time for one technology to replace another.

“Even if solar power were now cheaper than fossil fuel, it would be another 30 years before it replaced fossil fuel,” said Lu. And for any alternative technology to replace fossil fuels, some means of storing it is crucial.

Batteries beyond lithium ion cells

Lithium ion batteries have become ubiquitous for powering small portable devices.

But as Daniel Shu-Ping Lau, professor and head at Hong Kong Polytechnic University and director of the University Research Facility in Materials, pointed out, lithium is rare and expensive, prompting the search for alternatives.

He described work on sodium ion batteries, where one of the key challenges has been the MnO2 electrode commonly used, which is prone to acid attack and disproportionation redox reactions.

Lau described work by his group and colleagues to get around the electrode stability issues using environmentally friendly K-birnessite MnO2 (K0.3MnO2) nanosheets, which they can inkjet print on paper as well as steel.

Their sodium ion batteries challenge the state of the art for energy storage devices, with a working voltage of 2.5 V, maximum energy and power densities of 587 W h kg⁻¹ and 75 kW kg⁻¹ (per kilogram of cathode material), respectively, and 99.5% capacity retention over 500 cycles at 1 A g⁻¹.
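As a quick consistency check of those two figures, specific energy divided by specific power gives the time a cathode could sustain its maximum power:

```python
ENERGY_WH_PER_KG = 587.0   # reported maximum specific energy (per kg cathode)
POWER_W_PER_KG = 75_000.0  # reported maximum specific power (per kg cathode)

# Time at full power = energy / power; convert watt-hours to joules first.
seconds_at_max_power = ENERGY_WH_PER_KG * 3600 / POWER_W_PER_KG
print(round(seconds_at_max_power, 1))  # ~28.2 seconds
```

A burst of under half a minute at peak power is typical for high-power electrode figures: the maximum-energy and maximum-power operating points are measured separately and cannot be sustained simultaneously.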

Metal-air batteries are another alternative to lithium-ion batteries, and Tan Wai Kan from Toyohashi University of Technology in Japan described the potential of using carbon paper decorated with Fe2O3 nanoparticles in a metal-air battery.

They increase the surface area of the electrode with a mesh structure to improve efficiency, while using a solid KOH–ZrO2 electrolyte instead of a liquid helps mitigate the stability risks of hydrogen evolution, for greater reliability and efficiency.

A winning write off for pseudosupercapacitors

Other challenges aside, when it comes to stability, supercapacitors leave most batteries far behind.

Because there is no mass movement, just charge, they tend to stay stable for not just hundreds but hundreds of thousands of cycles.

They are already in use in the Shanghai bus system and in the emergency doors of some aircraft, as Robert Slade, emeritus professor of inorganic and materials chemistry at the University of Surrey, pointed out.

He described work on “pseudocapacitance”, a term popularised in the 1980s and 1990s to describe a charge storage process that is by nature faradaic – that is, charge transport through redox processes – but where aspects of the behaviour are capacitive.

MnO2 is well known to impart pseudocapacitance in alkaline solutions but Slade and his colleagues focused on MoO3.

Although MoO3 is a poor conductor, it accepts protons in acids to form HxMoO3, and exploiting the additional surface area of nanostructures further helps give access to the pseudocapacitance, so that the team was able to demonstrate a charge–discharge rate of 20 A g⁻¹ for over 10,000 cycles.

This is competitive with MnO2 alkaline systems. “So don’t write off materials that other people have written off, such as MoO3, because a bit of ‘chemical trickery’ can make them useful,” he concluded.

Down but not out for solid oxide fuel cells

But do we gain from the proliferation of so many different alternatives to fossil fuels? According to John Zhu, professor in the School of Chemical Engineering at the University of Queensland in Australia, “yes.”

“For clean energy we need more than one solution,” was his response when queried on the point after his talk.

In particular he had a number of virtues to espouse with respect to solid oxide fuel cells (SOFCs), which had been the topic of his own presentation.

Besides the advantage of potential 24-7 operation, SOFCs generate energy rather than merely storing it. As Zhu pointed out, “With a battery the energy source may still be dirty – so you are just moving the pollution from a high population density area to a low one.”

In contrast, an SOFC plant generates electricity directly from oxidizing a fuel, while at the same time it halves the CO2 emission of a coal-based counterpart, and achieves an efficiency of more than 60%.

If combined with hot-water generation, more than 80% efficiency is possible, double that of a conventional coal plant. All this is achieved with cheap materials, as no noble metals are needed.
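The efficiency comparison can be made concrete in terms of fuel energy consumed per unit of useful energy delivered. The 60% and 80% figures are from the text; the 40% figure for a conventional coal plant is an assumption consistent with the "double the efficiency" claim.

```python
# Plant efficiencies (fraction of fuel energy delivered as useful output)
coal_eff = 0.40      # assumed conventional coal plant
sofc_eff = 0.60      # SOFC, electricity only (from the text)
sofc_chp_eff = 0.80  # SOFC combined with hot-water generation (from the text)

# Fuel energy required per kWh of useful output
for name, eff in [("coal", coal_eff), ("SOFC", sofc_eff),
                  ("SOFC + hot water", sofc_chp_eff)]:
    print(f"{name}: {1 / eff:.2f} kWh fuel per kWh delivered")

ratio = (1 / coal_eff) / (1 / sofc_chp_eff)  # 2.0, matching "double"
```

On these assumptions the SOFC-with-cogeneration plant needs exactly half the fuel per delivered kWh, which (for the same fuel) would also roughly halve the CO2 emitted per kWh.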

Too good to be true? It seemed so at one point as promising corporate ventures plummeted, one example being Ceramic Fuel Cells Ltd, which was formed in 1992 by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and a consortium of energy and industrial companies.

After becoming ASX listed in 2004, and opening production facilities in Australia and Germany, it eventually filed voluntary bankruptcy in 2015.

So “Are SOFCs going to die?” asked Zhu.

So long as funding is the lifeline of research, apparently not, with the field continuing to attract investment from the US Department of Energy – including $6 million for FuelCell Energy Inc. Share prices for GE Global Research and Bloom Energy have also doubled in the two months since July 2018, but Zhu highlighted challenges that remain.

At €25,000 to install a 2 kW system, he suggests that cost is not the issue so much as durability. While an SOFC plant’s lifetime should exceed 10 years, most don’t last that long, largely due to the high operating temperatures of 800–1000 °C, which lead to thermal degradation and seal failure. Lower operating temperatures would also allow faster start-up and the use of cheaper materials.

The limiting factor for reducing temperatures is the cathode material, as its resistance is too high in cooler conditions. Possible alternative cathode materials do exist, including 3D heterostructured electrodes such as La2NiO4-decorated Ba0.5Sr0.5Co0.8Fe0.2O3−δ (BSCF with an LN shell).

Photocatalysts all wrapped up

Other routes for energy on demand have looked at water splitting and CO2 reduction.

As Lu pointed out in his opening remarks, the success of these approaches hinges on engineering better catalysts, and here Somnath Roy from the Indian Institute of Technology Madras had some progress to report.

“TiO2 is to catalysis what silicon is to microelectronics,” he told attendees of his talk during the graphene energy materials session. However, the photocatalytic activity of TiO2 peaks in the UV, and as a result there have been many efforts to shift its response towards the visible.
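Why TiO2's activity peaks in the UV follows directly from its band gap, since the absorption edge sits at λ = hc/E. The 3.2 eV band gap for anatase TiO2 is a textbook value rather than a figure from the article.

```python
# Absorption edge of a semiconductor photocatalyst from its band gap
H_C_EV_NM = 1239.84   # h*c expressed in eV·nm
band_gap_ev = 3.2     # anatase TiO2 (textbook value, not from the article)

edge_nm = H_C_EV_NM / band_gap_ev
print(f"absorption edge: {edge_nm:.0f} nm")
```

The result, roughly 387 nm, falls just short of the ~400 nm edge of the visible spectrum, which is why so much effort goes into red-shifting TiO2's response.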

Building on previous work with graphene–TiO2 composites, he and his colleagues developed a process to produce well-separated TiO2 nanotubes (leaving space for reactions) wrapped in graphene.

Although the graphene did not shift the peak catalytic activity towards visible wavelengths, it did improve the catalysis through its effect on electron and hole transport.

There was no shortage of ideas at AEM 2018, but as Lu told attendees,

“Ultimately uptake does not depend on the best technology but the best return on investment.”

Speaking to Physics World he added,

“The route to market for any energy materials will require systematic assessment of the technical advantages, market demand and a number of iterations of property-performance-system optimization, and open innovation and collaboration will be the name of the game for successful translation of materials to product or processes.”

Whatever technologies do eventually stick, time is of the essence. Most estimates place the tipping point for catastrophic global warming at 2050.

Allowing 30 years for the infrastructure overhaul that could allow alternative energies to totally replace fossil fuels leaves little more than a year for those technologies to pitch “the best return on investment”.

Little wonder advanced energy materials research is teeming.
