MIT – 10 Technology Breakthroughs for 2019, Part II – with Guest Curator Bill Gates



This is Part II of MIT’s ‘10 Technology Breakthroughs for 2019’, re-posted from MIT Technology Review, with guest curator Bill Gates. You can read Part I here.

Part I Intro from Bill Gates: How We’ll Invent the Future

I was honored when MIT Technology Review invited me to be the first guest curator of its 10 Breakthrough Technologies. Narrowing down the list was difficult. I wanted to choose things that not only will create headlines in 2019 but also capture this moment in technological history—which got me thinking about how innovation has evolved over time.

 

Robot dexterity

NICOLAS ORTEGA

  • Why it matters If robots could learn to deal with the messiness of the real world, they could do many more tasks.
  • Key Players OpenAI
    Carnegie Mellon University
    University of Michigan
    UC Berkeley
  • Availability 3-5 years

Robots are teaching themselves to handle the physical world.

For all the talk about machines taking jobs, industrial robots are still clumsy and inflexible. A robot can repeatedly pick up a component on an assembly line with amazing precision and without ever getting bored—but move the object half an inch, or replace it with something slightly different, and the machine will fumble ineptly or paw at thin air.

But while a robot can’t yet be programmed to figure out how to grasp any object just by looking at it, as people do, it can now learn to manipulate the object on its own through virtual trial and error.

One such project is Dactyl, a robot that taught itself to flip a toy building block in its fingers. Dactyl, which comes from the San Francisco nonprofit OpenAI, consists of an off-the-shelf robot hand surrounded by an array of lights and cameras. Using what’s known as reinforcement learning, neural-network software learns how to grasp and turn the block within a simulated environment before the hand tries it out for real. The software experiments, randomly at first, strengthening connections within the network over time as it gets closer to its goal.

It usually isn’t possible to transfer that type of virtual practice to the real world, because things like friction or the varied properties of different materials are so difficult to simulate. The OpenAI team got around this by adding randomness to the virtual training, giving the robot a proxy for the messiness of reality.
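The randomized-training idea can be sketched in a few lines. This is a toy illustration, not OpenAI's actual method (Dactyl uses large-scale reinforcement learning with neural networks and a full physics simulator): here a simple random search tunes a one-parameter controller, but each training episode draws a new friction value, so the learned setting has to work across a whole range of conditions rather than in one exact simulator.

```python
import random

def simulate(gain, friction):
    """One 'episode': push a block toward target position 1.0.
    Velocity is damped by friction; return final distance to target."""
    pos, vel = 0.0, 0.0
    for _ in range(50):
        force = gain * (1.0 - pos)          # simple proportional controller
        vel = (vel + force) * (1.0 - friction)
        pos += vel
    return abs(1.0 - pos)

def train(episodes=200, seed=0):
    """Random-search training with domain randomization: every candidate
    is scored under several randomly drawn frictions, a stand-in for the
    'messiness of reality' the text describes."""
    rng = random.Random(seed)
    best_gain, best_err = None, float("inf")
    for _ in range(episodes):
        gain = rng.uniform(0.0, 1.0)
        err = sum(simulate(gain, rng.uniform(0.2, 0.8)) for _ in range(5))
        if err < best_err:
            best_gain, best_err = gain, err
    return best_gain

gain = train()
# The randomization-trained controller should still track the target
# under a friction value it never saw exactly during training.
print(simulate(gain, 0.5) < 0.1)
```

The point of the randomization is visible in the training loop: because no single friction value is ever "the" simulator, the search cannot overfit to one setting, which is the proxy OpenAI used for real-world variability.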

We’ll need further breakthroughs for robots to master the advanced dexterity needed in a real warehouse or factory. But if researchers can reliably employ this kind of learning, robots might eventually assemble our gadgets, load our dishwashers, and even help Grandma out of bed. —Will Knight

New-wave nuclear power

BOB MUMGAARD/PLASMA SCIENCE AND FUSION CENTER/MIT

Advanced fusion and fission reactors are edging closer to reality. 

New nuclear designs that have gained momentum in the past year are promising to make this power source safer and cheaper. Among them are generation IV fission reactors, an evolution of traditional designs; small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach. Developers of generation IV fission designs, such as Canada’s Terrestrial Energy and Washington-based TerraPower, have entered into R&D partnerships with utilities, aiming for grid supply (somewhat optimistically, maybe) by the 2020s.

Small modular reactors typically produce in the tens of megawatts of power (for comparison, a traditional nuclear reactor produces around 1,000 MW). Companies like Oregon’s NuScale say the miniaturized reactors can save money and reduce environmental and financial risks.

There has even been progress on fusion. Though no one expects delivery before 2030, companies like General Fusion and Commonwealth Fusion Systems, an MIT spinout, are making some headway. Many consider fusion a pipe dream, but because the reactors can’t melt down and don’t create long-lived, high-level waste, it should face much less public resistance than conventional nuclear. (Bill Gates is an investor in TerraPower and Commonwealth Fusion Systems.) —Leigh Phillips

NENOV | GETTY

Predicting preemies

  • Why it matters 15 million babies are born prematurely every year; it’s the leading cause of death for children under age five
  • Key player Akna Dx
  • Availability A test could be offered in doctors’ offices within five years

A simple blood test can predict if a pregnant woman is at risk of giving birth prematurely.

Our genetic material lives mostly inside our cells. But small amounts of “cell-free” DNA and RNA also float in our blood, often released by dying cells. In pregnant women, that cell-free material is an alphabet soup of nucleic acids from the fetus, the placenta, and the mother.

Stephen Quake, a bioengineer at Stanford, has found a way to use that to tackle one of medicine’s most intractable problems: the roughly one in 10 babies born prematurely.

Free-floating DNA and RNA can yield information that previously required invasive ways of grabbing cells, such as taking a biopsy of a tumor or puncturing a pregnant woman’s belly to perform an amniocentesis. What’s changed is that it’s now easier to detect and sequence the small amounts of cell-free genetic material in the blood. In the last few years researchers have begun developing blood tests for cancer (by spotting the telltale DNA from tumor cells) and for prenatal screening of conditions like Down syndrome.

The tests for these conditions rely on looking for genetic mutations in the DNA. RNA, on the other hand, is the molecule that regulates gene expression—how much of a protein is produced from a gene. By sequencing the free-floating RNA in the mother’s blood, Quake can spot fluctuations in the expression of seven genes that he singles out as associated with preterm birth. That lets him identify women likely to deliver too early. Once alerted, doctors can take measures to stave off an early birth and give the child a better chance of survival.
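The logic of such an expression-based screen can be sketched with a toy example. Everything below is a hypothetical placeholder: the gene names, reference values, and z-score threshold are invented for illustration, and the real test uses a published statistical model over seven specific genes rather than a simple cutoff.

```python
import statistics

# Hypothetical reference data: typical cell-free RNA expression levels
# for a small gene panel in full-term pregnancies (illustrative numbers).
REFERENCE = {
    "GENE_A": [10.1, 9.8, 10.4, 10.0, 9.7],
    "GENE_B": [5.2, 5.0, 5.5, 4.9, 5.1],
    "GENE_C": [7.9, 8.2, 8.0, 7.8, 8.1],
}

def z_scores(sample):
    """Compare one patient's expression levels against the reference."""
    out = {}
    for gene, ref in REFERENCE.items():
        mean = statistics.mean(ref)
        sd = statistics.stdev(ref)
        out[gene] = (sample[gene] - mean) / sd
    return out

def flag_preterm_risk(sample, threshold=3.0):
    """Flag the pregnancy if any panel gene deviates strongly from the
    reference distribution (a stand-in for the published risk model)."""
    return any(abs(z) > threshold for z in z_scores(sample).values())

print(flag_preterm_risk({"GENE_A": 10.0, "GENE_B": 5.1, "GENE_C": 8.0}))
print(flag_preterm_risk({"GENE_A": 14.5, "GENE_B": 5.1, "GENE_C": 8.0}))
```

The sketch captures the shape of the idea: fluctuations in a handful of genes, measured against a reference population, are what flag the elevated risk.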

The technology behind the blood test, Quake says, is quick, easy, and less than $10 a measurement. He and his collaborators have launched a startup, Akna Dx, to commercialize it. —Bonnie Rochman

BRUCE PETERSON

Gut probe in a pill

  • Why it matters The device makes it easier to screen for and study gut diseases, including one that keeps millions of children in poor countries from growing properly

  • Key player Massachusetts General Hospital
  • Availability Now used in adults; testing in infants begins in 2019

A small, swallowable device captures detailed images of the gut without anesthesia, even in infants and children.

Environmental enteric dysfunction (EED) may be one of the costliest diseases you’ve never heard of. Marked by inflamed intestines that are leaky and absorb nutrients poorly, it’s widespread in poor countries and is one reason why many people there are malnourished, have developmental delays, and never reach a normal height. No one knows exactly what causes EED and how it could be prevented or treated.

Practical screening to detect it would help medical workers know when to intervene and how. Therapies are already available for infants, but diagnosing and studying illnesses in the guts of such young children often requires anesthetizing them and inserting a tube called an endoscope down the throat. It’s expensive, uncomfortable, and not practical in areas of the world where EED is prevalent.

So Guillermo Tearney, a pathologist and engineer at Massachusetts General Hospital (MGH) in Boston, is developing small devices that can be used to inspect the gut for signs of EED and even obtain tissue biopsies. Unlike endoscopes, they are simple to use at a primary care visit.

Tearney’s swallowable capsules contain miniature microscopes. They’re attached to a flexible string-like tether that provides power and light while sending images to a briefcase-like console with a monitor. This lets the health-care worker pause the capsule at points of interest and pull it out when finished, allowing it to be sterilized and reused. (Though it sounds gag-inducing, Tearney’s team has developed a technique that they say doesn’t cause discomfort.) It can also carry technologies that image the entire surface of the digestive tract at the resolution of a single cell or capture three-dimensional cross sections a couple of millimeters deep.

The technology has several applications; at MGH it’s being used to screen for Barrett’s esophagus, a precursor of esophageal cancer. For EED, Tearney’s team has developed an even smaller version for use in infants who can’t swallow a pill. It’s been tested on adolescents in Pakistan, where EED is prevalent, and infant testing is planned for 2019.

The little probe will help researchers answer questions about EED’s development—such as which cells it affects and whether bacteria are involved—and evaluate interventions and potential treatments. —Courtney Humphries

PAPER BOAT CREATIVE | GETTY

Custom cancer vaccines

  • Why it matters Conventional chemotherapies take a heavy toll on healthy cells and aren’t always effective against tumors
  • Key players BioNTech
    Genentech
  • Availability In human testing

The treatment incites the body’s natural defenses to destroy only cancer cells by identifying mutations unique to each tumor.

Scientists are on the cusp of commercializing the first personalized cancer vaccine. If it works as hoped, the vaccine, which triggers a person’s immune system to identify a tumor by its unique mutations, could effectively shut down many types of cancers.

By using the body’s natural defenses to selectively destroy only tumor cells, the vaccine, unlike conventional chemotherapies, limits damage to healthy cells. The attacking immune cells could also be vigilant in spotting any stray cancer cells after the initial treatment.

The possibility of such vaccines began to take shape in 2008, five years after the Human Genome Project was completed, when geneticists published the first sequence of a cancerous tumor cell.

Soon after, investigators began to compare the DNA of tumor cells with that of healthy cells—and other tumor cells. These studies confirmed that all cancer cells contain hundreds if not thousands of specific mutations, most of which are unique to each tumor.

A few years later, a German startup called BioNTech provided compelling evidence that a vaccine containing copies of these mutations could catalyze the body’s immune system to produce T cells primed to seek out, attack, and destroy all cancer cells harboring them.

In December 2017, BioNTech began a large test of the vaccine in cancer patients, in collaboration with the biotech giant Genentech. The ongoing trial is targeting at least 10 solid cancers and aims to enroll upwards of 560 patients at sites around the globe.

The two companies are designing new manufacturing techniques to produce thousands of personally customized vaccines cheaply and quickly. That will be tricky because creating the vaccine involves performing a biopsy on the patient’s tumor, sequencing and analyzing its DNA, and rushing that information to the production site. Once produced, the vaccine needs to be promptly delivered to the hospital; delays could be deadly. —Adam Piore

BRUCE PETERSON/STYLING: MONICA MARIANO

The cow-free burger

  • Why it matters Livestock production causes catastrophic deforestation, water pollution, and greenhouse-gas emissions
  • Key players Beyond Meat
    Impossible Foods
  • Availability Plant-based now; lab-grown around 2020

Both lab-grown and plant-based alternatives approximate the taste and nutritional value of real meat without the environmental devastation.

The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.

By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.

Depending on the animal, producing a pound of meat protein with Western industrialized methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.

The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.

Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste. Researchers at Maastricht University in the Netherlands, who are working to produce lab-grown meat at scale, believe they’ll have a lab-grown burger available by next year. One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.

The better environmental case can be made for plant-based meats from companies like Beyond Meat and Impossible Foods (Bill Gates is an investor in both companies), which use pea proteins, soy, wheat, potatoes, and plant oils to mimic the texture and taste of animal meat.

Beyond Meat has a new 26,000-square-foot (2,400-square-meter) plant in California and has already sold upwards of 25 million burgers from 30,000 stores and restaurants. According to an analysis by the Center for Sustainable Systems at the University of Michigan, a Beyond Meat patty would probably generate 90% less in greenhouse-gas emissions than a conventional burger made from a cow. —Markkus Rovito

 

NICO ORTEGA

Carbon dioxide catcher

  • Why it matters Removing CO2 from the atmosphere might be one of the last viable ways to stop catastrophic climate change
  • Key players Carbon Engineering
    Climeworks
    Global Thermostat
  • Availability 5-10 years

 

Practical and affordable ways to capture carbon dioxide from the air can soak up excess greenhouse-gas emissions.

Even if we slow carbon dioxide emissions, the warming effect of the greenhouse gas can persist for thousands of years. To prevent a dangerous rise in temperatures, the UN’s climate panel now concludes, the world will need to remove as much as 1 trillion tons of carbon dioxide from the atmosphere this century.

In a surprise finding last summer, Harvard climate scientist David Keith calculated that machines could, in theory, pull this off for less than $100 a ton, through an approach known as direct air capture. That’s an order of magnitude cheaper than earlier estimates that led many scientists to dismiss the technology as far too expensive—though it will still take years for costs to fall to anywhere near that level.
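Putting the two figures above together gives a sense of scale. This back-of-envelope multiplication assumes, purely for illustration, that the UN panel's full trillion-ton target were met entirely by direct air capture at Keith's theoretical $100-per-ton floor:

```python
# Back-of-envelope cost of the removal target at Keith's estimated price.
# Both figures come from the text above; real-world costs would vary
# enormously with scale, technology, and how much other mitigation does.
tons_to_remove = 1e12   # up to 1 trillion tons of CO2 this century
cost_per_ton = 100      # Keith's theoretical floor, in USD

total_cost = tons_to_remove * cost_per_ton
print(f"${total_cost:,.0f}")   # on the order of $100 trillion
```

Even at the optimistic price, the implied bill is on the order of a year of global GDP spread over the century, which is why the article treats direct air capture as a last resort rather than a first one.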

But once you capture the carbon, you still need to figure out what to do with it.

Carbon Engineering, the Canadian startup Keith cofounded in 2009, plans to expand its pilot plant to ramp up production of its synthetic fuels, using the captured carbon dioxide as a key ingredient. (Bill Gates is an investor in Carbon Engineering.)

Zurich-based Climeworks’s direct air capture plant in Italy will produce methane from captured carbon dioxide and hydrogen, while a second plant in Switzerland will sell carbon dioxide to the soft-drinks industry. So will Global Thermostat of New York, which finished constructing its first commercial plant in Alabama last year.

Still, if it’s used in synthetic fuels or sodas, the carbon dioxide will mostly end up back in the atmosphere. The ultimate goal is to lock greenhouse gases away forever. Some could be nested within products like carbon fiber, polymers, or concrete, but far more will simply need to be buried underground, a costly job that no business model seems likely to support.

In fact, pulling CO2 out of the air is, from an engineering perspective, one of the most difficult and expensive ways of dealing with climate change. But given how slowly we’re reducing emissions, there are no good options left. —James Temple

BRUCE PETERSON

An ECG on your wrist

Regulatory approval and technological advances are making it easier for people to continuously monitor their hearts with wearable devices.

Fitness trackers aren’t serious medical devices. An intense workout or loose band can mess with the sensors that read your pulse. But an electrocardiogram—the kind doctors use to diagnose abnormalities before they cause a stroke or heart attack—requires a visit to a clinic, and people often fail to take the test in time.

ECG-enabled smart watches, made possible by new regulations and innovations in hardware and software, offer the convenience of a wearable device with something closer to the precision of a medical one.

An Apple Watch–compatible band from Silicon Valley startup AliveCor that can detect atrial fibrillation, a frequent cause of blood clots and stroke, received clearance from the FDA in 2017. Last year, Apple released its own FDA-cleared ECG feature, embedded in the watch itself.

The health-device company Withings also announced plans for an ECG-equipped watch shortly after.

Current wearables still employ only a single sensor, whereas a real ECG has 12. And no wearable can yet detect a heart attack as it’s happening.

But this might change soon. Last fall, AliveCor presented preliminary results to the American Heart Association on an app and two-sensor system that can detect a certain type of heart attack. —Karen Hao

THEDMAN | GETTY

Sanitation without sewers

  • Why it matters 2.3 billion people lack safe sanitation, and many die as a result
  • Key players Duke University
    University of South Florida
    Biomass Controls
    California Institute of Technology
  • Availability 1-2 years

 

Energy-efficient toilets can operate without a sewer system and treat waste on the spot.

About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.

Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well.

In 2011 Bill Gates created what was essentially the X Prize in this area—the Reinvent the Toilet Challenge. Since the contest’s launch, several teams have put prototypes in the field. All process the waste locally, so there’s no need for large amounts of water to carry it to a distant treatment plant.

Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil.

One drawback is that the toilets don’t work at every scale. The Biomass Controls product, for example, is designed primarily for tens of thousands of users per day, which makes it less well suited for smaller villages. Another system, developed at Duke University, is meant to be used only by a few nearby homes.

So the challenge now is to make these toilets cheaper and more adaptable to communities of different sizes. “It’s great to build one or two units,” says Daniel Yeh, an associate professor at the University of South Florida, who led the NEWgenerator team. “But to really have the technology impact the world, the only way to do that is mass-produce the units.” —Erin Winick

BRUCE PETERSON

Smooth-talking AI assistants

  • Why it matters AI assistants can now perform conversation-based tasks like booking a restaurant reservation or coordinating a package drop-off rather than just obey simple commands
  • Key players Google
    Alibaba
    Amazon
  • Availability 1-2 years

 

New techniques that capture semantic relationships between words are making machines better at understanding natural language.

We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their alleged smarts. They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations.

But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.
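The fill-in-the-blank objective behind BERT can be illustrated with a toy. The sketch below just counts which word appears between two context words in a tiny corpus; BERT learns the same kind of masked-word statistics with a large neural network over millions of sentences, so this is an analogy for the training objective, not the model itself.

```python
from collections import Counter

# A toy corpus; in BERT's training, this would be millions of sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat slept on the mat",
    "the cat sat on a mat",
]

def predict_masked(left, right):
    """Predict the masked word from its immediate context by counting
    which word most often appears between the two context words."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(1, len(words) - 1):
            if words[i - 1] == left and words[i + 1] == right:
                counts[words[i]] += 1
    return counts.most_common(1)[0][0]

print(predict_masked("the", "sat"))  # fills "the [MASK] sat"
```

Scaled up from bigram counts to a deep network with attention over the whole sentence, this "predict the missing word" game is what lets BERT match humans on gap-filling tests.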

These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.

Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.

In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.

But while AI programs have gotten better at figuring out what you want, they still can’t understand a sentence. Lines are scripted or generated statistically, reflecting how hard it is to imbue machines with true language understanding. Once we cross that hurdle, we’ll see yet another evolution, perhaps from logistics coordinator to babysitter, teacher—or even friend? —Karen Hao


New Hybrid solar cells harness energy from … raindrops?


Renewable energy is a clean and inexhaustible source of energy, and a great alternative to fossil fuels.

Renewable sources emit no greenhouse gases into the environment. They are environment-friendly and help us tackle the most pressing concern of the 21st century: climate change.

Solar is one of the most important forms of renewable energy. The sun is an inexhaustible source of energy, and solar cells capture that clean energy for both commercial and domestic purposes. Despite these advantages, solar cells are poor at producing energy in rainy weather: with less incoming light, they become practically useless when rain clouds are overhead.

But what if we could overcome this problem?  What if we could actually generate energy from raindrops?

Scientists from Soochow University in China have overcome this design flaw by enabling solar cells to generate energy in both sunny and rainy weather.

This technology holds the potential of revolutionizing renewable energy completely.

The key part of this new hybrid solar technology is the triboelectric nanogenerator, or TENG: a device capable of producing an electric charge from the friction of two materials rubbing together.

How do hybrid solar cells work?

These new hybrid solar cells work using a material called graphene, which has the ability to produce energy from raindrops.

Like any other solar panel, these hybrid cells generate electricity on a sunny day using conventional photovoltaic technology; but when clouds gather and raindrops fall, the panel switches to its graphene system.

Graphene, in its liquid form, can produce electricity because its delocalized electrons can form a pseudocapacitor with ions in the water, and it is this pseudocapacitor that generates the electricity.

When raindrops fall on the hybrid solar panels, the salts they carry separate into positive and negative ions.

The positive ions are mainly salt-derived ions, such as sodium and calcium, which accumulate on the surface of the graphene. There they interact with the loosely bound electrons in the graphene, creating a system that acts like a pseudocapacitor.

The difference in potential between these charge layers produces a voltage and drives a current.

It is important to mention, though, that this is not the first attempt to invent an all-weather solar panel. Earlier, researchers created a panel with a triboelectric nanogenerator on top, an insulating layer in the middle, and a solar panel at the bottom. But that system had too much electrical resistance, and the opaque insulating layer kept sunlight from reaching the solar cells.

The newly designed hybrid solar panel is a more efficient device, in which the triboelectric nanogenerator and the solar panel share a common, transparent electrode. Special grooves incorporated in the material increase the amount of energy captured from both raindrops and sunlight.

According to the researchers, the idea for the grooves came from commercial DVDs, which come pre-etched with parallel grooves just hundreds of nanometers across. Designing the device with these grooves boosts the surface interaction with raindrops and sunlight that would otherwise be lost to reflection.

Benefits of hybrid solar panels

Until now, solar cells have had the drawback of producing energy only in the presence of sunlight, making it impossible to harness energy during the rainy season. Countries with low-intensity sunlight, such as those at high northern latitudes, have been unable to switch to solar energy.

With hybrid solar panels, anyone in the world could harness solar power. Researchers expect that in a few years these panels will be efficient enough to provide electricity for homes and businesses, ending our dependency on fossil fuels.

They will also save money on electricity bills. Even though the initial setup costs are higher, countries with good exposure to both sunlight and rain can expect a good return on investment.

Hurdles for hybrid solar panels

The current designs are not efficient enough for commercial use. Tested in various simulated weather conditions, the device achieved around 13% efficiency in sunlight, while simulated raindrops yielded an efficiency of around 6%.

Currently used commercial solar cells give an efficiency of around 15%, so the new design is approaching a viable alternative to presently used solar panels. However, the efficiency of the triboelectric nanogenerators was not reported.
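To put those percentages in concrete terms, here is a rough per-square-meter comparison. The 1,000 W/m² figure is the standard test irradiance used to rate solar panels; it is an assumption for illustration, not a number from the article, and real-world output depends on location and weather.

```python
# Rough per-square-meter output comparison at standard test irradiance.
# Efficiency figures come from the text above; 1000 W/m2 is the usual
# benchmark irradiance for rating panels (an assumed value here).
IRRADIANCE = 1000  # W/m2, standard test condition

def output_watts(efficiency):
    """Electrical output per square meter at the benchmark irradiance."""
    return IRRADIANCE * efficiency

print(output_watts(0.13))  # hybrid cell in sunlight, W/m2
print(output_watts(0.15))  # typical commercial cell, W/m2
```

On this benchmark the hybrid design gives up roughly 20 W/m² in full sun versus a conventional panel, the trade for being able to harvest anything at all when it rains.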

Conclusion

With the continuous depletion of non-renewable sources and the disastrous climate change caused by fossil fuels, many countries are moving toward eco-friendly alternatives. Solar energy is one of the cleanest forms of energy available. With the advent of new technologies like the hybrid solar panel, we can hope to achieve a viable method of electricity generation.

Researchers are continuously trying to improve the efficiency of hybrid solar cells in order to make them commercially available. This will boost our efforts to produce energy in all weather conditions, which is not possible with currently available technology. With the expansion of solar energy projects worldwide, the researchers expect to roll out commercial designs in the next five years.

Researchers in China are even trying to integrate this new technology into mobile and electronic devices, such as electronic clothing.

Researchers Just Found a Way to Turn CO2 Into Plastic With Unprecedented Efficiency


Researchers have developed catalysts that can convert carbon dioxide—the main cause of global warming—into plastics, fabrics, resins, and other products.

The electrocatalysts are the first materials, aside from enzymes, that can turn carbon dioxide and water into carbon building blocks containing one, two, three, or four carbon atoms with more than 99 percent efficiency.

Two of the products—methylglyoxal (C3) and 2,3-furandiol (C4)—can be used as precursors for plastics, adhesives, and pharmaceuticals. Toxic formaldehyde could be replaced by methylglyoxal, which is safer.

(Karin Calvinho/Rutgers University-New Brunswick)

The discovery, based on the chemistry of artificial photosynthesis, is detailed in the journal Energy & Environmental Science.

“Our breakthrough could lead to the conversion of carbon dioxide into valuable products and raw materials in the chemical and pharmaceutical industries,” says senior author Charles Dismukes, a professor in the chemistry and chemical biology department and the biochemistry and microbiology department at Rutgers University–New Brunswick.

He is also a principal investigator at Rutgers’ Waksman Institute of Microbiology.

Previously, scientists showed that carbon dioxide can be electrochemically converted into methanol, ethanol, methane, and ethylene with relatively high yields.

But such production is inefficient and too costly to be commercially feasible, according to lead author Karin Calvinho, a chemistry doctoral student.

But using five catalysts made of nickel and phosphorus, which are cheap and abundant, researchers can electrochemically convert carbon dioxide and water into a wide array of carbon-based products, she says.

The choice of catalyst and other conditions determine how many carbon atoms can be stitched together to make molecules or even generate longer polymers. In general, the longer the carbon chain, the more valuable the product.

The next step is to learn more about the underlying chemical reaction, so it can be used to produce other valuable products such as diols, which are widely used in the polymer industry, or hydrocarbons that can be used as renewable fuels. The researchers are designing, building, and testing electrolyzers for commercial use.

Based on their work, the researchers have earned patents for the electrocatalysts and formed RenewCO₂, a start-up company.

Source: Rutgers University

NREL: Envisioning Net-Zero Emission Energy Systems


NREL researchers contribute to a major journal article describing pathways to net-zero emissions for particularly difficult-to-decarbonize economic sectors

As global energy consumption continues to grow—by some projections, more than doubling by 2100—all sectors of the economy will need to find ways to drastically reduce their carbon dioxide emissions if average global temperatures are to be held under international climate targets. Two NREL authors contributed to a recently published article in Science that examined potential barriers and opportunities to decarbonizing certain energy systems that are essential to modern civilization but remain stubbornly reliant on carbon-emitting processes.

Difficult-to-Decarbonize Energy Sectors Contribute 27% of Carbon Emissions

Many sectors of the economy, such as light-duty transportation, heating, cooling, and lighting, could be straightforward to decarbonize through electrification and use of low- or net-zero-emitting energy sources. However, some energy uses, such as aviation, long-distance transport and shipping, steel and cement production, and a highly reliable electricity supply, will be more difficult to decarbonize. Together, these sectors contribute 27% of global carbon emissions today. With global demand for many of these sectors growing rapidly, solutions are urgently needed, the article’s authors write.

“The timeframes and economic costs of any energy transition are enormous. Most technologies installed today will have a lifetime of perhaps 30 to 50 years and the transition from research to actual deployment can also be quite lengthy,” said Bri-Mathias Hodge, an author on the paper and manager of the Power Systems Design and Studies Group at NREL. “Because of this we need to be able to identify the most pertinent issues that will need to be solved fairly far in the future and get started now, before we find ourselves heavily invested in even more carbon-intensive, long-term infrastructure.”

Diverse Expert Perspectives Informed Study

Discussion of the article’s underlying issues began at an Aspen Global Change Institute meeting in July 2016. “The diversity and depth of expertise at the workshop—and contributing to the paper—were outstanding,” said Doug Arent, the other NREL researcher to contribute to the paper and deputy associate lab director for Scientific Computing and Energy Analysis. “It was great to hear the different perspectives and learn about new areas that are related to our work at NREL, but that I don’t get to hear about every day at NREL,” added Hodge.

Considering demographic trends, institutional barriers, and economic and technological constraints, the group of researchers concluded that future net-zero emission systems will depend critically on integration of now-discrete energy industries. Although a range of low- or net-zero-emitting energy technologies already exists for these energy services, they may be able to fully meet future energy demands only through cross-sector coordination. Collaboration could speed research and development of new technologies, and coordinating operations across sectors could better utilize capital-intensive assets, create broader markets, and streamline regulations.

Research Should Pursue Technologies and Integration to Decarbonize These Sectors

The article’s authors suggest two broad research thrusts: research in technologies and processes that could decarbonize these energy services, and research in systems integration to provide these energy services in a more reliable and cost-effective way.

The Science article concludes by stating, “if we want to achieve a robust, reliable, affordable, net-zero emissions energy system later this century, we must be researching, developing, demonstrating, and deploying those candidate technologies now.”

Our Environment: An Underwater Irish Canyon Is Sucking CO2 Out of the Atmosphere (We heard the Irish were good at “drinking” but … )



Northeast Atlantic bathymetry, with Porcupine Bank and the Porcupine Seabight labelled.

A research expedition to a huge underwater canyon off the Irish coast has shed light on a hidden process that sucks the greenhouse gas carbon dioxide (CO2) out of the atmosphere.

Researchers led by a team from the University College Cork (UCC) took an underwater research drone by boat out to Porcupine Bank Canyon — a massive, cliff-walled underwater trench where Ireland’s continental shelf ends — to build a detailed map of its boundaries and interior. Along the way, the researchers reported in a statement, they noted a process at the edge of the canyon that pulls CO2 from the atmosphere and buries it deep under the sea.

All around the rim of the canyon live cold-water corals, which thrive on dead plankton raining down from the ocean surface. Those tiny, surface-dwelling plankton build their bodies out of carbon extracted from CO2 in the air. When they die, the corals on the seafloor consume them and build their bodies out of the same carbon. Over time, as the corals die and the cliff faces shift and crumble, the dead coral falls deep into the canyon. There, the carbon stays put for long periods. [In Photos: ROV Explores Deep-Sea Marianas Trench]

There’s evidence that a lot of carbon is moving this way; the researchers said they found “significant” dead coral buildup at the canyon bottom.

This process doesn’t move nearly enough carbon dioxide to prevent climate change, the researchers said. But it does shed light on yet another mechanism that keeps the planet’s CO2 levels regulated when human industry doesn’t interfere.

“Increasing CO2 concentrations in our atmosphere are causing our extreme weather,” Andy Wheeler, a UCC geoscientist and one of the researchers on the expedition, said in the statement. “Oceans absorb this CO2 and canyons are a rapid route for pumping it into the deep ocean where it is safely stored away.”

The mapping expedition covered an area about the size of Chicago and revealed places where the canyon has moved and shifted significantly in the past.

“We took cores with the ROV, and the sediments reveal that although the canyon is quiet now, periodically it is a violent place where the seabed gets ripped up and eroded,” Wheeler said.

The expedition will return to shore today (Aug. 10).

Related

Will underwater drones bring a sea change to naval – and nuclear – warfare? 


Forbes on Energy: We Don’t Need Solar And Wind To Save The Climate — And It’s A Good Thing, Too


France and Sweden show solar and wind are not needed. Special Contributor: M. Shellenberger

For 30 years, experts have claimed that humankind needs to switch to solar and wind energy to address climate change. But do we really?

Consider the fact that, while no nation has created a near-zero carbon electricity supply out of solar and wind, the only successful efforts to create near-zero carbon electricity supplies didn’t require solar or wind whatsoever.

As such, solar and wind aren’t just insufficient; they are also unnecessary for solving climate change.

That turns out to be a good thing.

Sunlight and wind are inherently unreliable and energy-dilute. As such, adding solar panels and wind turbines to the grid in large quantities increases the cost of generating electricity, locks in fossil fuels, and increases the environmental footprint of energy production.

There is a better way. But to understand what it is, we first must understand the modern history of renewable energies.

Renewables Revolution: Always Just Around the Corner

Most people think of solar and wind as new energy sources. In fact, they are two of our oldest.

The predecessor to Stanford University professor Mark Jacobson, who advocates “100 percent renewables,” was a man named John Etzler.

In 1833, Etzler proposed to build massive solar power plants that used mirrors to concentrate sunlight on boilers, mile-long wind farms, and new dams to store power.

Even electricity-generating solar panels and wind turbines are old. Both date back to the late 1800s.

Throughout the 20th Century, scientists claimed — and the media credulously reported — that solar, wind, and batteries were close to a breakthrough that would allow them to power all of civilization.

Consider these headlines from The New York Times and other major newspapers:

• 1891: “Solar Energy: What the Sun’s Rays Can Do and May Yet Be Able to Do“ — The author notes that while solar energy was not yet economical “…the day is not unlikely to arrive before long…”

• 1923: “World Awaits Big Invention to Meet Needs of Masses”: “…solar energy may be developed… or tidal energy… or solar energy through the production of fuel.”

• 1931: “Use of Solar Energy Near a Solution.” “Improved Device Held to Rival Hydroelectric Production”

• 1934: “After Coal, The Sun” “…surfaces of copper oxide already available”

• 1935: “New Solar Engine Gives Cheap Power”

• 1939: “M.I.T. Will ‘Store’ Heat of the Sun”

• 1948: “Changing Solar Energy into Fuel ‘Blocked Out’ in GM Laboratory”: “…the most difficult part of the problem is over…”

• 1949: “U.S. Seeks to Harness Sun, May Ask Big Fund, Krug Says”

Reporters were as enthusiastic about renewables in the 1930s as they are today.

“It is just possible the world is standing at a turning point,” a New York Times reporter gushed in 1931, “in the evolution of civilization similar to that which followed the invention by James Watt of the steam engine.”

Decade after decade, scientists and journalists re-discovered how much solar energy fell upon the earth.

“Even on such an area as small as Manhattan Island the noontime heat is enough, could it be utilized, to drive all the steam engines in the world,” The Washington Star reported in 1891.

Progress in chemistry and materials sciences was hyped. “Silver Selenide is Key Substance,” The New York Times assured readers.

In 1948, Interior Secretary Krug called for a clean energy moonshot consisting of “hundreds of millions” for solar energy, pointing to its “tremendous potential.”

R&D subsidies for solar began shortly after and solar and wind production subsidies began in earnest in the 1970s.

Solar and wind subsidies grew substantially over the following decades, and were increased in 2005 and again in 2009 on the basis of a breakthrough being just around the corner.

By 2016, renewables were receiving 94 times more in U.S. subsidies than nuclear and 46 times more than fossil fuels per unit of energy generated.

According to Bloomberg New Energy Finance (BNEF), public and private actors spent $1.1 trillion on solar and over $900 billion on wind between 2007 and 2016.

Global investment in solar and wind hovered at around $300 billion per year between 2010 and 2016.

Did the solar and wind energy revolution arrive?

Judge for yourself: in 2016, solar and wind constituted 1.3 and 3.9 percent of the planet’s electricity, respectively.

Real World Renewables

Are there places in the world where wind and solar have become a significant share of electricity supplies?

The best real-world evidence for wind’s role in decarbonization comes from the nation of Denmark. By 2017, wind and solar had grown to become 48 and 3 percent of Denmark’s electricity.

Does that make Denmark a model?

Not exactly. Denmark has fewer people than Wisconsin, a land area smaller than West Virginia, and an economy smaller than the state of Washington.

Moreover, the reason Denmark was able to deploy so much wind was because it could easily export excess wind electricity to neighboring countries — albeit at a high cost: Denmark today has the most expensive electricity in Europe.

And as one of the world’s largest manufacturers of turbines, Denmark could justify expensive electricity as part of its export strategy.

As for solar, those U.S. states that have deployed the most of it have seen sharp rises in their electricity costs and prices compared to the national average.

As recently as two years ago, some renewable energy advocates held up Germany as a model for the world.

No more. While Germany has deployed more solar and wind capacity than almost any other country, its emissions have been flat for a decade while its electricity has become the second most expensive in Europe.

More recently, Germany has permitted the demolition of old forests, churches, and villages in order to mine and burn coal.

Meanwhile, the two nations whose electricity sectors produce some of the least amount of carbon emissions per capita of any developed nation did so with very little solar and wind: France and Sweden.

Sweden last year generated a whopping 95 percent of its total electricity from zero-carbon sources, with 42 and 41 percent coming from nuclear and hydroelectric power, respectively.

France generated 88 percent of its total electricity from zero-carbon sources, with 72 and 10 percent coming from nuclear and hydroelectric power, respectively.
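The arithmetic behind these national figures is easy to check with a short script. The nuclear and hydro shares below come from the article; the "other_zero_carbon" residuals are back-filled so the totals match the article's 95 and 88 percent, and the fossil remainders are likewise inferred, so treat the exact mixes as illustrative.

```python
# Illustrative check of the zero-carbon generation shares cited above.
# Values are percentages of total electricity generation.
def zero_carbon_share(mix, zero_carbon_sources):
    """Sum the shares of the sources considered zero-carbon."""
    return sum(share for src, share in mix.items() if src in zero_carbon_sources)

# Nuclear/hydro shares from the article; residual categories are inferred.
sweden = {"nuclear": 42, "hydro": 41, "other_zero_carbon": 12, "fossil": 5}
france = {"nuclear": 72, "hydro": 10, "other_zero_carbon": 6, "fossil": 12}

zc = {"nuclear", "hydro", "other_zero_carbon"}
print(zero_carbon_share(sweden, zc))  # 95, matching the article
print(zero_carbon_share(france, zc))  # 88
```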

Other nations like Norway, Brazil, and Costa Rica have almost entirely decarbonized their electricity supplies with the use of hydroelectricity alone.

That being said, hydroelectricity is far less reliable and scalable than nuclear.

Brazil is a case in point. Hydro has fallen from over 90 percent of its electricity 20 years ago to about two-thirds in 2016. Because Brazil failed to grow its nuclear program in the 1990s, it made up for new electricity demand with fossil fuels.

And both Brazil and hydro-heavy California stand as warnings against relying on hydro-electricity in a period of climate change. Both had to use fossil fuels to make up for hydro during recent drought years.

That leaves us with nuclear power as the only truly scalable, reliable, low-carbon energy source proven capable of eliminating carbon emissions from the power sector.

Why This is Good News

The fact that we don’t need renewables to solve climate change is good news for humans and the natural environment.

The dilute nature of water, sunlight, and wind means that up to 5,000 times more land and 10 to 15 times more concrete, cement, steel, and glass are required than for nuclear plants.

All of that material throughput results in renewables creating large quantities of waste, much of it toxic.

For example, solar panels create 200 to 300 times more hazardous waste than nuclear, and outside of the European Union none of it is required to be recycled or safely contained.

Meanwhile, the huge amounts of land required for solar and wind production have had a devastating impact on rare and threatened desert tortoises, bats, and eagles, even when solar and wind supply just a small percentage of electricity.

Does this mean renewables are never desirable?

Not necessarily. Hydroelectric dams remain the way many poor countries gain access to reliable electricity, and both solar and wind might be worthwhile in some circumstances.

But there is nothing in either their history or their physical attributes that suggests solar and wind in particular could or should be the centerpiece of efforts to deal with climate change.

In fact, France demonstrates the costs and consequences of adding solar and wind to an electricity system where decarbonization is nearly complete.

France is already seeing its electricity prices rise as a result of deploying more solar and wind.

Because France lacks Sweden’s hydroelectric potential, it would need to burn far more natural gas (and/or petroleum) in order to integrate significantly more solar and wind.

If France were to reduce the share of its electricity from nuclear from 75 percent to 50 percent — as had been planned — carbon emissions and the cost of electricity would rise.

It is partly for this reason that France’s president recently declared he would not reduce the amount of electricity from nuclear.

Some experts recently pointed out that nuclear plants, like hydroelectric dams, can ramp up and down. France currently does so to balance demand.

But ramping nuclear plants to accommodate intermittent electricity from solar and wind simply adds to the cost of making electricity without delivering meaningful emissions reductions or cost savings. That’s because ramping a nuclear plant down saves only very small amounts of fuel and no labor.

Do We Need Solar and Wind to Save Nuclear?

While solar and wind are largely unnecessary at best and counterproductive at worst when it comes to combating climate change, might we need them as part of a political compromise to prevent nuclear plants from closing?

At least in some circumstances, the answer is yes. Recently in New Jersey, for example, nuclear energy advocates had to accept a subsidy rate 18 to 28 times higher for solar than for nuclear.

The extremely disproportionate subsidy for solar was a compromise in exchange for saving the state’s nuclear plants.

While nuclear enjoys the support of just half of the American people, for example, solar and wind are supported by 70 to 80 percent of them. Thus, in some cases, it might make sense to package nuclear and renewables together.

But we should be honest that such subsidies for solar and wind are policy sweeteners needed to win over powerful financial interests and not good climate policy.

What matters most is that we accept that there are real world physical obstacles to scaling solar and wind.

Consider that the problem of the unreliability of solar has been discussed for as long as there have existed solar panels. During all of that time, solar advocates have waved their hands about potential future solutions.

“Serious problems will, of course, be raised by the fact that sun-power will not be continuous,” wrote a New York Times reporter in 1931. “Whether these will be solved by some sort of storage arrangement or by the operating of photogenerators in conjunction with some other generator cannot be said at present.”

We now know that, in the real world, electricity grid managers cope with the unreliability of solar by firing up petroleum and natural gas generators.

As such, while there might be good reasons to continue to subsidize the production of solar and wind, their role in locking in fossil fuel generators means that climate change should not be one of them.

Watch a YouTube Video on Our Latest Project

Solar-to-Fuel System Recycles CO2 to Make Ethanol and Ethylene: Berkeley National Lab



Schematic of a solar-powered electrolysis cell which converts carbon dioxide into hydrocarbon and oxygenate products with an efficiency far higher than natural photosynthesis. Power-matching electronics allow the system to operate over a range of sun conditions. (Credit: Clarissa Towle/Berkeley Lab)

Berkeley Lab advance is first demonstration of efficient, light-powered production of fuel via artificial photosynthesis

Scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have harnessed the power of photosynthesis to convert carbon dioxide into fuels and alcohols at efficiencies far greater than plants. The achievement marks a significant milestone in the effort to move toward sustainable sources of fuel.

Many systems have successfully reduced carbon dioxide to chemical and fuel precursors, such as carbon monoxide or a mix of carbon monoxide and hydrogen known as syngas. This new work, described in a study published in the journal Energy and Environmental Science, is the first to successfully demonstrate the approach of going from carbon dioxide directly to target products, namely ethanol and ethylene, at energy conversion efficiencies rivaling natural counterparts.

The researchers did this by optimizing each component of a photovoltaic-electrochemical system to reduce voltage loss, and creating new materials when existing ones did not suffice.

“This is an exciting development,” said study principal investigator Joel Ager, a Berkeley Lab scientist with joint appointments in the Materials Sciences and the Chemical Sciences divisions. “As rising atmospheric CO2 levels change Earth’s climate, the need to develop sustainable sources of power has become increasingly urgent. Our work here shows that we have a plausible path to making fuels directly from sunlight.”

That sun-to-fuel path is among the key goals of the Joint Center for Artificial Photosynthesis (JCAP), a DOE Energy Innovation Hub established in 2010 to advance solar fuel research. The study was conducted at JCAP’s Berkeley Lab campus.

The initial focus of JCAP research was tackling the efficient splitting of water in the photosynthesis process. Having largely achieved that task using several types of devices, JCAP scientists doing solar-driven carbon dioxide reduction began setting their sights on achieving efficiencies similar to those demonstrated for water splitting, considered by many to be the next big challenge in artificial photosynthesis.

Another research group at Berkeley Lab is tackling this challenge by focusing on a specific component in a photovoltaic-electrochemical system. In a study published today, they describe a new catalyst that can achieve carbon dioxide to multicarbon conversion using record-low inputs of energy.

Not just for noon


For this JCAP study, researchers engineered a complete system to work at different times of day, not just at a light energy level of 1-sun illumination, which is equivalent to the peak of brightness at high noon on a sunny day. They varied the brightness of the light source to show that the system remained efficient even in low light conditions.

When the researchers coupled the electrodes to silicon photovoltaic cells, they achieved solar conversion efficiencies of 3 to 4 percent for 0.35 to 1-sun illumination. Changing the configuration to a high-performance tandem solar cell yielded a conversion efficiency to hydrocarbons and oxygenates exceeding 5 percent at 1-sun illumination.
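Solar-to-fuel efficiency is conventionally defined as the chemical energy stored in the products divided by the incident solar energy. The sketch below applies that definition; the standard 1-sun irradiance of roughly 100 mW/cm² is real, but the current density, product mix, and Faradaic efficiencies are hypothetical operating values chosen for illustration, not numbers from the paper.

```python
# Solar-to-fuel efficiency: chemical power stored / solar power in.
# eta = j * sum(FE_i * E_i) / P_sun, where j is current density,
# FE_i the Faradaic efficiency toward product i, and E_i the
# thermodynamic potential for forming that product from CO2 and water.
def solar_to_fuel_efficiency(j_ma_cm2, products, p_sun_mw_cm2=100.0):
    """j in mA/cm^2; products maps name -> (faradaic_efficiency, E_thermo_volts).
    1-sun irradiance is ~100 mW/cm^2."""
    stored = j_ma_cm2 * sum(fe * e for fe, e in products.values())  # mW/cm^2
    return stored / p_sun_mw_cm2

# Hypothetical operating point: 4 mA/cm^2 with a mixed product stream.
products = {"ethylene": (0.40, 1.15), "ethanol": (0.25, 1.14)}
eta = solar_to_fuel_efficiency(4.0, products)
print(f"{eta:.1%}")  # ~3.0% with these illustrative inputs
```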

Copper-Silver Cathode

At left is a surface view of a bimetallic copper-silver nanocoral cathode taken from a scanning electron micrograph. To the right is an energy-dispersive X-ray image of the cathode with the copper (in pink/red) and silver (in green) highlighted. (Credit: Gurudayal/Berkeley Lab)

“We did a little dance in the lab when we reached 5 percent,” said Ager, who also holds an appointment as an adjunct professor at UC Berkeley’s Materials Science and Engineering Department.

Among the new components developed by the researchers are a copper-silver nanocoral cathode, which reduces the carbon dioxide to hydrocarbons and oxygenates, and an iridium oxide nanotube anode, which oxidizes the water and creates oxygen.

“The nice feature of the nanocoral is that, like plants, it can make the target products over a wide range of conditions, and it is very stable,” said Ager.

The researchers characterized the materials at the National Center for Electron Microscopy at the Molecular Foundry, a DOE Office of Science User Facility at Berkeley Lab. The results helped them understand how the metals functioned in the bimetallic cathode. Specifically, they learned that silver aids in the reduction of carbon dioxide to carbon monoxide, while the copper picks up from there to reduce carbon monoxide further to hydrocarbons and alcohols.

Seeking better, low-energy breakups



Because carbon dioxide is a stubbornly stable molecule, breaking it up typically involves a significant input of energy.

“Reducing CO2 to a hydrocarbon end product like ethanol or ethylene can take up to 5 volts, start to finish,” said study lead author Gurudayal, a postdoctoral fellow at Berkeley Lab. “Our system reduced that by half while maintaining the selectivity of products.”
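The energy impact of halving that voltage follows directly from Faraday's law, E = n·F·V. The sketch below uses the standard 12-electron reduction of CO2 to ethanol as a worked example; the stoichiometry and Faraday constant are textbook values, and the 5 V and 2.5 V figures are the ones quoted above.

```python
# Electrical energy to make one mole of ethanol from CO2: E = n * F * V.
# 2 CO2 + 12 H+ + 12 e- -> C2H5OH + 3 H2O, so n = 12 electrons per mole.
F = 96485.0  # Faraday constant, C/mol

def energy_kj_per_mol(n_electrons, cell_volts):
    return n_electrons * F * cell_volts / 1000.0  # J -> kJ

at_5v   = energy_kj_per_mol(12, 5.0)
at_2_5v = energy_kj_per_mol(12, 2.5)
print(round(at_5v))    # 5789 kJ per mole of ethanol at 5 V
print(round(at_2_5v))  # 2895 kJ at 2.5 V, half the energy input
```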

Notably, the electrodes operated well in water, a neutral pH environment.

“Research groups working on anodes mostly do so using alkaline conditions since anodes typically require a high pH environment, which is not ideal for the solubility of CO2,” said Gurudayal. “It is very difficult to find an anode that works in neutral conditions.”

The researchers customized the anode by growing the iridium oxide nanotubes on a zinc oxide surface to create a more uniform surface area to better support chemical reactions.

“By working through each step so carefully, these researchers demonstrated a level of performance and efficiency that people did not think was possible at this point,” said Berkeley Lab chemist Frances Houle, JCAP deputy director for Science and Research Integration, who was not part of the study. “This is a big step forward in the design of devices for efficient CO2 reduction and testing of new materials, and it provides a clear framework for the future advancement of fully integrated solar-driven CO2-reduction devices.”

Other co-authors on the study include James Bullock, a Berkeley Lab postdoctoral researcher in materials sciences, who was instrumental in engineering the system’s photovoltaic and electrolysis cell pairing. Bullock works in the lab of study co-author Ali Javey, Berkeley Lab senior faculty scientist and a UC Berkeley professor of electrical engineering and computer sciences.

This work is supported by the DOE Office of Science.

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel Prizes. 
The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit http://www.lbl.gov.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

NREL, Swiss Scientists Power Past Solar Efficiency Records


NREL scientist Adele Tamboli, co-author of a recent article on silicon-based multijunction solar cells, stands in front of an array of solar panels. Credit: Dennis Schroeder

August 25, 2017




Second collaborative effort proves silicon-based multijunction cells that reach nearly 36% efficiency

Collaboration between researchers at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL), the Swiss Center for Electronics and Microtechnology (CSEM), and the École Polytechnique Fédérale de Lausanne (EPFL) shows the high potential of silicon-based multijunction solar cells.

The research groups created tandem solar cells with record efficiencies of converting sunlight into electricity under 1-sun illumination. The resulting paper, “Raising the One-Sun Conversion Efficiency of III–V/Si Solar Cells to 32.8% for Two Junctions and 35.9% for Three Junctions,” appears in the new issue of Nature Energy. Solar cells made solely from materials in Groups III and V of the Periodic Table have shown high efficiencies, but are more expensive.

Stephanie Essig, a former NREL post-doctoral researcher now working at EPFL in Switzerland, is lead author of the newly published research that details the steps taken to improve the efficiency of the multijunction cell. While at NREL, Essig co-authored “Realization of GaInP/Si Dual-Junction Solar Cells with 29.8% 1-Sun Efficiency,” which was published in the IEEE Journal of Photovoltaics a year ago.

In addition to Essig, authors of the new research paper are Timothy Remo, John F. Geisz, Myles A. Steiner, David L. Young, Kelsey Horowitz, Michael Woodhouse, and Adele Tamboli, all with NREL; and Christophe Allebe, Loris Barraud, Antoine Descoeudres, Matthieu Despeisse, and Christophe Ballif, all from CSEM.

“This achievement is significant because it shows, for the first time, that silicon-based tandem cells can provide efficiencies competing with more expensive multijunction cells consisting entirely of III-V materials,” Tamboli said. “It opens the door to develop entirely new multijunction solar cell materials and architectures.”

In testing silicon-based multijunction solar cells, the researchers found that the highest dual-junction efficiency (32.8%) came from a tandem cell that stacked a layer of gallium arsenide (GaAs) developed by NREL atop a film of crystalline silicon developed by CSEM. An efficiency of 32.5% was achieved using a gallium indium phosphide (GaInP) top cell, which is a similar structure to the previous record efficiency of 29.8% announced in January 2016. 

A third cell, consisting of a GaInP/GaAs tandem cell stacked on a silicon bottom cell, reached a triple-junction efficiency of 35.9%—just 2% below the overall triple-junction record.

The existing photovoltaics market is dominated by modules made of single-junction silicon solar cells, with efficiencies between 17% and 24%. 

The researchers noted in the report that making the transition from a silicon single-junction cell to a silicon-based dual-junction solar cell will enable manufacturers to push efficiencies past 30% while still benefiting from their expertise in making silicon solar cells.

The obstacle to the adoption of these multijunction silicon-based solar cells, at least in the near term, is the cost. Assuming 30% efficiency, the researchers estimated the GaInP-based cell would cost $4.85 per watt and the GaAs-based cell would cost $7.15 per watt. 

But as manufacturing ramps up and the efficiencies of these types of cells climbs to 35%, the researchers predict the cost per watt could fall to 66 cents for a GaInP-based cell and to 85 cents for the GaAs-based cell. 
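Cost per watt scales inversely with efficiency for a fixed cost per unit area, since a module's power is its area times efficiency times the roughly 1,000 W/m² of 1-sun illumination. The sketch below makes that relationship explicit; the per-square-meter costs are back-calculated from the article's $/W figures purely for illustration, and the comparison shows that efficiency gains alone cannot account for the projected price drop.

```python
# $/W = (module cost per m^2) / (efficiency * 1000 W/m^2 at 1 sun),
# since a 1 m^2 module at efficiency eta delivers eta * 1000 watts.
def cost_per_watt(area_cost_usd_m2, efficiency):
    return area_cost_usd_m2 / (efficiency * 1000.0)

# Back-calculated from the article's $4.85/W at 30% efficiency:
today = cost_per_watt(1455.0, 0.30)
# Raising efficiency to 35% at the same $1455/m^2 only gets you so far:
eff_only = cost_per_watt(1455.0, 0.35)
# Hitting the projected $0.66/W at 35% implies area costs near $231/m^2,
# i.e. most of the drop must come from cheaper manufacturing at scale:
projected = cost_per_watt(231.0, 0.35)
print(round(today, 2), round(eff_only, 2), round(projected, 2))  # 4.85 4.16 0.66
```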

The scientists noted that such a precipitous price drop is not unprecedented; for instance, the cost of Chinese-made photovoltaic modules fell from $4.50 per watt in 2006 to $1 per watt in 2011.

The cost of a solar module in the United States accounts for 20% to 40% of the price of a photovoltaic system. Increasing cell efficiency to 35%, the researchers estimated, could reduce the system cost by as much as 45 cents per watt for commercial installations. 

However, if the costs of a III-V cell cannot be reduced to the levels of the researchers’ long-term scenario, then the use of cheaper, high-efficiency materials for the top cell will be needed to make them cost-competitive in general power markets.

The funding for the research came from the Energy Department’s SunShot Initiative—which aims to make solar energy a low-cost electricity source for all Americans through research and development efforts in collaboration with public and private partners—and from the Swiss Confederation and the Nano-Tera.ch initiative.


NREL is the U.S. Department of Energy’s primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated for the Energy Department by The Alliance for Sustainable Energy, LLC.

Army COE Creates New Energy Efficient ‘Graphene Oxide’ Water Filter at Commercial Scale



The Army Corps of Engineers has successfully created a usable prototype of a new type of water filter.

The membranes are made of a mixture of chitosan, a material commonly found in shrimp shells, and a synthetic material known as “graphene oxide,” which is the subject of intense research worldwide.

According to the Army Corps, one problem encountered by scientists working with graphene oxide has been synthesizing the material at a scale that can be put to use.

“One of the major breakthroughs that we’ve had here is that with our casting process, we’re not limited by size,” explains Luke Gurtowski, a research chemical engineer working on the membranes.


These filters have been found to effectively remove a number of different contaminants commonly found in water.

Dr. Christopher Griggs is the research scientist in charge of overseeing development of the new membranes.

Dr. Griggs told us, “Anybody who’s experienced water shortages or has been concerned about their water quality, or any type of contaminants in the water, this type of technology certainly works to address that.”

Another challenge faced by conventional water filtering methods is maintaining high energy efficiency.

“It requires a lot of energy for the net driving pressure to force the water through the membrane,” Dr. Griggs explains. “…we’re going to have to look to new materials to try to get those efficiency gains, and so graphene oxide is a very promising candidate for that.”

The Engineer Research and Development Center currently has two patents associated with the new filters and hopes to apply them for both civil and military purposes in the near future. 

Google’s Parent Company Will Soon Compete With Tesla for Energy Storage Solutions: Project Malta at ‘Alphabet X’



Maximizing Renewables



Given the dramatic impact human-made carbon emissions are having on our planet, cleaner energy sources have become increasingly popular alternatives to their fossil fuel counterparts. Currently, solar and wind are the most widely used renewable energy sources, but both are dependent on certain conditions.

The former can capture energy only during daylight hours, while the latter is more unpredictable, but often peaks at night.

As such, there’s a mismatch between when solar and wind energy are available and when energy is needed.

The world needs a way to maximize renewable energy usage, and that’s what Malta, a project currently brewing at Alphabet X, the “moonshot” factory by Google’s parent company, is hoping to provide.

The goal of Alphabet X is to develop technologies that could “someday make the world a radically better place.” The organization follows a three-part blueprint for their moonshot projects that starts with identifying a “huge problem” and then providing a “radical solution” that could be implemented using a “breakthrough technology.”

For Malta, the idea was to find a way to maximize the use of energy generated from renewables. Their radical solution is bridging the gap between renewable energy and grid-scale energy storage technologies using a breakthrough technology developed by Stanford physicist and Nobel laureate Robert Laughlin.

According to the project’s website, this technology is still theoretical and involves storing electricity as either heat within molten salt or cold within a liquid similar to the antifreeze used in cars. They claim this energy could remain stored for up to weeks at a time.
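A rough sense of how much energy molten salt can hold comes from the sensible-heat formula Q = m·c·ΔT. The salt mass, specific heat, and temperature swing below are illustrative assumptions (a nitrate "solar salt" has a specific heat of roughly 1.5 kJ/(kg·K)), not Malta's published figures:

```python
# Back-of-envelope sensible-heat storage capacity: Q = m * c * dT.
# Assumed values, for illustration only:
#   1 tonne of nitrate salt, c ~ 1.5 kJ/(kg*K), 250 K temperature swing.

def thermal_capacity_kwh(mass_kg: float, specific_heat_kj_per_kg_k: float,
                         delta_t_k: float) -> float:
    """Heat stored in the salt, converted from kJ to kWh (1 kWh = 3600 kJ)."""
    q_kj = mass_kg * specific_heat_kj_per_kg_k * delta_t_k
    return q_kj / 3600.0

print(thermal_capacity_kwh(1000, 1.5, 250))  # ~104 kWh of heat per tonne
```

Scaled up, a tank holding several thousand tonnes of salt would store on the order of hundreds of megawatt-hours of heat under the same assumptions, which is the regime grid-scale storage needs.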

Storing Energy


Essentially, Malta is hoping to develop clean and cost-effective energy storage devices, which is similar to the concept behind Tesla’s Powerpack. The difference between the Malta project’s tech and the Powerpack is mostly what’s inside. While Tesla’s energy storage device uses 16 individual battery pods, Malta’s relies on molten salt or the antifreeze-like liquid.

Additionally, the tanks used to store the salt used by Malta’s system could potentially last for up to 40 years, which the project claims is three or more times longer than other current storage options. That extended lifespan would make Malta a cheaper alternative to other renewable energy storage devices.
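That lifespan claim can be turned into a simple annualized-cost comparison. The capital cost below is a hypothetical placeholder; only the 40-year versus roughly 13-year lifespans follow from the article's "three or more times longer" claim:

```python
# Annualized capital cost = capex / lifetime.
# The $1M capex is a hypothetical placeholder, equal for both systems,
# to isolate the effect of lifespan alone.

def annualized_cost(capex_usd: float, lifetime_years: float) -> float:
    """Capital cost spread evenly over the system's working life."""
    return capex_usd / lifetime_years

malta = annualized_cost(1_000_000, 40)    # 40-year salt tanks
battery = annualized_cost(1_000_000, 13)  # ~a third of that lifespan

print(malta, battery)  # the longer-lived system costs ~1/3 as much per year
```

All else being equal, tripling the lifetime cuts the per-year capital cost to a third, which is the sense in which the extended lifespan makes Malta a cheaper alternative.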

Image credit: Malta/X

After two years of developing and designing their system, the Malta team is now gearing up to test the commercial viability of their technology. “The next step is to build a megawatt-scale prototype plant which would be large enough to prove the technology at commercial scale,” according to their website.

We now have multiple ways to generate energy from renewables, but if we ever hope to fully transition away from traditional energy solutions, we need better storage devices.

Though they are clearly better for the environment, renewables aren’t as consistent as fossil fuels, and that unreliability is a huge barrier to widespread adoption.

Storage systems like those proposed by Malta could collect the energy generated by renewables and ensure it is available to power grids whenever needed, putting us one step closer to a future completely free of fossil fuels.

Watch Our Video on a New Energy Storage Company for Nano-Enabled Batteries and Super Capacitors

Update: Super Capacitor-Assisted Silicon Nanowire Batteries for EV and Small-Form-Factor Markets. A New Class of Battery/Energy Storage Materials is being developed to support the High Energy, High Capacity, High Performance, High Cycle-Life Battery Markets.

“Ultrathin Asymmetric Porous-Nickel Graphene-Based Supercapacitor with High Energy Density and Silicon Nanowire”

A New Generation Battery that is:

 Energy Dense
 High Specific Power
 Simple Manufacturing Process
 Low Manufacturing Cost
 Rapid Charge/Re-Charge
 Flexible Form Factor
 Long Warranty Life
 Non-Toxic
 Highly Scalable

Key Markets & Commercial Applications

 EV (18650 & 21700), Drone, and Marine Batteries
 Wearable Electronics and The Internet of Things
 Estimated $240 Billion Market by 2025