There has long been a debate about whether Apple’s secretive automotive project is only a self-driving system for vehicles or a full electric autonomous vehicle. It now looks clear that the latter is the case, as Apple hires Michael Schwekutsch, Tesla’s head of electric powertrains.
We described his departure from Tesla as a big loss for the company since he is amongst the most experienced engineers who have brought electric powertrain programs to market, not just at Tesla, but in the industry as a whole.
When Schwekutsch joined Tesla back in 2015, we described his background:
“Michael Schwekutsch joined Tesla last year to lead powertrain developments after a two-decade long career working for legendary third-party powertrain engineering firms like BorgWarner and GKN Driveline. More recently, he managed programs for the electric and hybrid powertrains of the BMW i8, Porsche 918 Spyder, Fiat 500eV, Volvo XC90, among other popular vehicles.
Today, he is responsible for Tesla’s drive units from the design and engineering to the manufacturing and validation – all operations currently done at the Tesla Factory in Fremont, California.”
At Tesla, he participated in the development of “leading edge Drive Systems like the one of the Tesla Roadster II and Tesla Semi / Tesla Truck.”
Now Electrek learns from separate sources that he joined Apple’s Special Project Group, which includes the Cupertino company’s Project Titan division.
He is the latest of several top Tesla engineers to join the project, which for a time was thought to consist only of a self-driving system for vehicles after a scale-back of the plan.
Now that Schwekutsch, who has exclusively worked on electric powertrains over the last decade, has joined Apple, it is becoming clear that the company plans to bring a complete electric vehicle to market.
Schwekutsch will rejoin Doug Field, a longtime engineering executive at Tesla who went back to Apple to lead its car project last year alongside Bob Mansfield, whom Apple brought out of retirement in 2016 to lead its Project Titan car team.
Electrek has learned that Apple is also hiring several other former Tesla employees in what appears to be another wave of the poaching war between the two companies.
At the height of it back in 2015, Tesla CEO Elon Musk said about Apple:
“They have hired people we’ve fired. We always jokingly call Apple the ‘Tesla Graveyard.’”
More recently, however, Apple has hired some longtime executives and engineers that don’t appear to have been let go by Tesla. That said, the company has laid off many employees over the last year and some of them did go to Apple, which has experienced employment cut-backs of its own.
This is quite significant. Apple producing an electric vehicle from the ground up is a big deal.
Granted, they have no experience building vehicles, but they are hiring some of the top talent that made it happen against all odds in the past, like Field and Schwekutsch.
If you add to that the hundreds of billions in capital and the incredible software and hardware expertise of Apple, I think you have a winning solution.
I don’t want to get my hopes up too much, but I am excited for them to disrupt the space even more. I can see it accelerating the adoption of electric vehicles.
This is Part II of MIT Technology Review’s ‘10 Breakthrough Technologies for 2019,’ re-posted with guest curator Bill Gates. Part I, Bill Gates’ introduction, appears later in this post.
Robot dexterity
Why it matters: If robots could learn to deal with the messiness of the real world, they could do many more tasks.
Key players: OpenAI, Carnegie Mellon University, University of Michigan, UC Berkeley
Availability: 3-5 years
Robots are teaching themselves to handle the physical world.
For all the talk about machines taking jobs, industrial robots are still clumsy and inflexible. A robot can repeatedly pick up a component on an assembly line with amazing precision and without ever getting bored—but move the object half an inch, or replace it with something slightly different, and the machine will fumble ineptly or paw at thin air.
But while a robot can’t yet be programmed to figure out how to grasp any object just by looking at it, as people do, it can now learn to manipulate the object on its own through virtual trial and error.
One such project is Dactyl, a robot that taught itself to flip a toy building block in its fingers. Dactyl, which comes from the San Francisco nonprofit OpenAI, consists of an off-the-shelf robot hand surrounded by an array of lights and cameras. Using what’s known as reinforcement learning, neural-network software learns how to grasp and turn the block within a simulated environment before the hand tries it out for real. The software experiments, randomly at first, strengthening connections within the network over time as it gets closer to its goal.
It usually isn’t possible to transfer that type of virtual practice to the real world, because things like friction or the varied properties of different materials are so difficult to simulate. The OpenAI team got around this by adding randomness to the virtual training, giving the robot a proxy for the messiness of reality.
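To make the idea concrete, here is a deliberately tiny sketch of reinforcement learning with domain randomization. The environment, names, and dynamics are invented for illustration; this is not OpenAI’s Dactyl code.

```python
# Toy sketch of domain randomization in reinforcement learning.
# Everything here (environment, reward, update rule) is hypothetical.
import random

class SimulatedHand:
    """Stand-in simulator whose physics change every episode."""
    def __init__(self, friction, block_mass):
        self.friction = friction
        self.block_mass = block_mass

    def step(self, action):
        # Hypothetical reward: the action succeeds when it compensates
        # for this episode's sampled friction and mass.
        return 1.0 if abs(action - self.friction * self.block_mass) < 0.2 else 0.0

def train(episodes=10_000):
    policy = 0.5      # a single scalar "policy", for illustration only
    lr = 0.01
    for _ in range(episodes):
        # Domain randomization: resample the physics each episode so the
        # policy cannot overfit one simulator configuration.
        env = SimulatedHand(friction=random.uniform(0.2, 1.0),
                            block_mass=random.uniform(0.5, 1.5))
        action = policy + random.gauss(0, 0.1)   # exploration noise
        reward = env.step(action)
        # Crude policy-gradient-style update toward rewarded actions.
        policy += lr * reward * (action - policy)
    return policy

if __name__ == "__main__":
    print("trained policy parameter:", round(train(), 3))
```

The point is the resampling line: because friction and mass are never the same twice, the learned behavior has to work across a whole family of simulators, which is what lets it survive contact with the real hardware.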
We’ll need further breakthroughs for robots to master the advanced dexterity needed in a real warehouse or factory. But if researchers can reliably employ this kind of learning, robots might eventually assemble our gadgets, load our dishwashers, and even help Grandma out of bed. —Will Knight
New-wave nuclear power
Advanced fusion and fission reactors are edging closer to reality.
New nuclear designs that have gained momentum in the past year are promising to make this power source safer and cheaper. Among them are generation IV fission reactors, an evolution of traditional designs; small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach. Developers of generation IV fission designs, such as Canada’s Terrestrial Energy and Washington-based TerraPower, have entered into R&D partnerships with utilities, aiming for grid supply (somewhat optimistically, maybe) by the 2020s.
Small modular reactors typically produce in the tens of megawatts of power (for comparison, a traditional nuclear reactor produces around 1,000 MW). Companies like Oregon’s NuScale say the miniaturized reactors can save money and reduce environmental and financial risks.
From sodium-cooled fission to advanced fusion, a fresh generation of projects hopes to rekindle trust in nuclear energy.
There has even been progress on fusion. Though no one expects delivery before 2030, companies like General Fusion and Commonwealth Fusion Systems, an MIT spinout, are making some headway. Many consider fusion a pipe dream, but because the reactors can’t melt down and don’t create long-lived, high-level waste, it should face much less public resistance than conventional nuclear. (Bill Gates is an investor in TerraPower and Commonwealth Fusion Systems.) —Leigh Phillips
Predicting preemies
Why it matters: 15 million babies are born prematurely every year; it’s the leading cause of death for children under age five.
Key player: Akna Dx
Availability: A test could be offered in doctors’ offices within five years
A simple blood test can predict if a pregnant woman is at risk of giving birth prematurely.
Our genetic material lives mostly inside our cells. But small amounts of “cell-free” DNA and RNA also float in our blood, often released by dying cells. In pregnant women, that cell-free material is an alphabet soup of nucleic acids from the fetus, the placenta, and the mother.
Stephen Quake, a bioengineer at Stanford, has found a way to use that to tackle one of medicine’s most intractable problems: the roughly one in 10 babies born prematurely.
Free-floating DNA and RNA can yield information that previously required invasive ways of grabbing cells, such as taking a biopsy of a tumor or puncturing a pregnant woman’s belly to perform an amniocentesis. What’s changed is that it’s now easier to detect and sequence the small amounts of cell-free genetic material in the blood. In the last few years researchers have begun developing blood tests for cancer (by spotting the telltale DNA from tumor cells) and for prenatal screening of conditions like Down syndrome.
The tests for these conditions rely on looking for genetic mutations in the DNA. RNA, on the other hand, is the molecule that regulates gene expression—how much of a protein is produced from a gene. By sequencing the free-floating RNA in the mother’s blood, Quake can spot fluctuations in the expression of seven genes that he singles out as associated with preterm birth. That lets him identify women likely to deliver too early. Once alerted, doctors can take measures to stave off an early birth and give the child a better chance of survival.
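In software terms, the last step is a routine supervised classifier over a handful of expression measurements. The sketch below shows the shape of that computation with invented data and a generic model; it is not Quake’s actual pipeline, and the seven features merely stand in for the unnamed genes.

```python
# Schematic only: classify preterm-birth risk from 7 gene-expression levels.
# The data, gene panel, and model choice are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training set: one row per blood sample, one column per gene.
X = rng.normal(size=(200, 7))
# Toy labels: 1 = delivered preterm, 0 = carried to term.
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

new_sample = rng.normal(size=(1, 7))          # a new patient's panel
risk = model.predict_proba(new_sample)[0, 1]
print(f"estimated preterm risk: {risk:.0%}")
```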
Complications from preterm birth are the leading cause of death worldwide in children under five.
The technology behind the blood test, Quake says, is quick, easy, and less than $10 a measurement. He and his collaborators have launched a startup, Akna Dx, to commercialize it. —Bonnie Rochman
Gut probe in a pill
Why it matters: The device makes it easier to screen for and study gut diseases, including one that keeps millions of children in poor countries from growing properly.
Key player: Massachusetts General Hospital
Availability: Now used in adults; testing in infants begins in 2019
A small, swallowable device captures detailed images of the gut without anesthesia, even in infants and children.
Environmental enteric dysfunction (EED) may be one of the costliest diseases you’ve never heard of. Marked by inflamed intestines that are leaky and absorb nutrients poorly, it’s widespread in poor countries and is one reason why many people there are malnourished, have developmental delays, and never reach a normal height. No one knows exactly what causes EED and how it could be prevented or treated.
Practical screening to detect it would help medical workers know when to intervene and how. Therapies are already available for infants, but diagnosing and studying illnesses in the guts of such young children often requires anesthetizing them and inserting a tube called an endoscope down the throat. It’s expensive, uncomfortable, and not practical in areas of the world where EED is prevalent.
So Guillermo Tearney, a pathologist and engineer at Massachusetts General Hospital (MGH) in Boston, is developing small devices that can be used to inspect the gut for signs of EED and even obtain tissue biopsies. Unlike endoscopes, they are simple to use at a primary care visit.
Tearney’s swallowable capsules contain miniature microscopes. They’re attached to a flexible string-like tether that provides power and light while sending images to a briefcase-like console with a monitor. This lets the health-care worker pause the capsule at points of interest and pull it out when finished, allowing it to be sterilized and reused. (Though it sounds gag-inducing, Tearney’s team has developed a technique that they say doesn’t cause discomfort.) It can also carry technologies that image the entire surface of the digestive tract at the resolution of a single cell or capture three-dimensional cross sections a couple of millimeters deep.
The technology has several applications; at MGH it’s being used to screen for Barrett’s esophagus, a precursor of esophageal cancer. For EED, Tearney’s team has developed an even smaller version for use in infants who can’t swallow a pill. It’s been tested on adolescents in Pakistan, where EED is prevalent, and infant testing is planned for 2019.
The little probe will help researchers answer questions about EED’s development—such as which cells it affects and whether bacteria are involved—and evaluate interventions and potential treatments. —Courtney Humphries
Custom cancer vaccines
Why it matters: Conventional chemotherapies take a heavy toll on healthy cells and aren’t always effective against tumors.
Key players: BioNTech, Genentech
Availability: In human testing
The treatment incites the body’s natural defenses to destroy only cancer cells by identifying mutations unique to each tumor.
Scientists are on the cusp of commercializing the first personalized cancer vaccine. If it works as hoped, the vaccine, which triggers a person’s immune system to identify a tumor by its unique mutations, could effectively shut down many types of cancers.
By using the body’s natural defenses to selectively destroy only tumor cells, the vaccine, unlike conventional chemotherapies, limits damage to healthy cells. The attacking immune cells could also be vigilant in spotting any stray cancer cells after the initial treatment.
The possibility of such vaccines began to take shape in 2008, five years after the Human Genome Project was completed, when geneticists published the first sequence of a cancerous tumor cell.
Soon after, investigators began to compare the DNA of tumor cells with that of healthy cells—and other tumor cells. These studies confirmed that all cancer cells contain hundreds if not thousands of specific mutations, most of which are unique to each tumor.
A few years later, a German startup called BioNTech provided compelling evidence that a vaccine containing copies of these mutations could catalyze the body’s immune system to produce T cells primed to seek out, attack, and destroy all cancer cells harboring them.
In December 2017, BioNTech began a large test of the vaccine in cancer patients, in collaboration with the biotech giant Genentech. The ongoing trial is targeting at least 10 solid cancers and aims to enroll upwards of 560 patients at sites around the globe.
The two companies are designing new manufacturing techniques to produce thousands of personally customized vaccines cheaply and quickly. That will be tricky because creating the vaccine involves performing a biopsy on the patient’s tumor, sequencing and analyzing its DNA, and rushing that information to the production site. Once produced, the vaccine needs to be promptly delivered to the hospital; delays could be deadly. —Adam Piore
The cow-free burger
Why it matters: Livestock production causes catastrophic deforestation, water pollution, and greenhouse-gas emissions.
Key players: Beyond Meat, Impossible Foods
Availability: Plant-based now; lab-grown around 2020
Both lab-grown and plant-based alternatives approximate the taste and nutritional value of real meat without the environmental devastation.
The UN expects the world to have 9.8 billion people by 2050. And those people are getting richer. Neither trend bodes well for climate change—especially because as people escape poverty, they tend to eat more meat.
By that date, according to the predictions, humans will consume 70% more meat than they did in 2005. And it turns out that raising animals for human consumption is among the worst things we do to the environment.
Depending on the animal, producing a pound of meat protein with Western industrialized methods requires 4 to 25 times more water, 6 to 17 times more land, and 6 to 20 times more fossil fuels than producing a pound of plant protein.
The problem is that people aren’t likely to stop eating meat anytime soon. Which means lab-grown and plant-based alternatives might be the best way to limit the destruction.
Making lab-grown meat involves extracting muscle tissue from animals and growing it in bioreactors. The end product looks much like what you’d get from an animal, although researchers are still working on the taste. Researchers at Maastricht University in the Netherlands, who are working to produce lab-grown meat at scale, believe they’ll have a lab-grown burger available by next year. One drawback of lab-grown meat is that the environmental benefits are still sketchy at best—a recent World Economic Forum report says the emissions from lab-grown meat would be only around 7% less than emissions from beef production.
Meat production spews tons of greenhouse gas and uses up too much land and water. Is there an alternative that won’t make us do without?
The better environmental case can be made for plant-based meats from companies like Beyond Meat and Impossible Foods (Bill Gates is an investor in both companies), which use pea proteins, soy, wheat, potatoes, and plant oils to mimic the texture and taste of animal meat.
Beyond Meat has a new 26,000-square-foot (2,400-square-meter) plant in California and has already sold upwards of 25 million burgers from 30,000 stores and restaurants. According to an analysis by the Center for Sustainable Systems at the University of Michigan, a Beyond Meat patty would probably generate 90% less in greenhouse-gas emissions than a conventional burger made from a cow. —Markkus Rovito
Carbon dioxide catcher
Why it matters: Removing CO2 from the atmosphere might be one of the last viable ways to stop catastrophic climate change.
Key players: Carbon Engineering, Climeworks, Global Thermostat
Availability: 5-10 years
Practical and affordable ways to capture carbon dioxide from the air can soak up excess greenhouse-gas emissions.
Even if we slow carbon dioxide emissions, the warming effect of the greenhouse gas can persist for thousands of years. To prevent a dangerous rise in temperatures, the UN’s climate panel now concludes, the world will need to remove as much as 1 trillion tons of carbon dioxide from the atmosphere this century.
In a surprise finding last summer, Harvard climate scientist David Keith calculated that machines could, in theory, pull this off for less than $100 a ton, through an approach known as direct air capture. That’s an order of magnitude cheaper than earlier estimates that led many scientists to dismiss the technology as far too expensive—though it will still take years for costs to fall to anywhere near that level.
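A back-of-envelope multiplication shows why the per-ton price dominates the debate: even at Keith’s optimistic figure, removing the UN panel’s full target would cost on the order of

$$10^{12}\ \text{tons} \times \$100/\text{ton} = \$100\ \text{trillion}.$$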
But once you capture the carbon, you still need to figure out what to do with it.
Carbon Engineering, the Canadian startup Keith cofounded in 2009, plans to expand its pilot plant to ramp up production of its synthetic fuels, using the captured carbon dioxide as a key ingredient. (Bill Gates is an investor in Carbon Engineering.)
Zurich-based Climeworks’s direct air capture plant in Italy will produce methane from captured carbon dioxide and hydrogen, while a second plant in Switzerland will sell carbon dioxide to the soft-drinks industry. So will Global Thermostat of New York, which finished constructing its first commercial plant in Alabama last year.
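For reference, turning captured carbon dioxide and hydrogen into methane is conventionally done through the Sabatier reaction; assuming that is the process behind the Italian plant, the overall chemistry is:

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}$$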
Klaus Lackner’s once wacky idea increasingly looks like an essential part of solving climate change.
Still, if it’s used in synthetic fuels or sodas, the carbon dioxide will mostly end up back in the atmosphere. The ultimate goal is to lock greenhouse gases away forever. Some could be nested within products like carbon fiber, polymers, or concrete, but far more will simply need to be buried underground, a costly job that no business model seems likely to support.
In fact, pulling CO2 out of the air is, from an engineering perspective, one of the most difficult and expensive ways of dealing with climate change. But given how slowly we’re reducing emissions, there are no good options left. —James Temple
An ECG on your wrist
Regulatory approval and technological advances are making it easier for people to continuously monitor their hearts with wearable devices.
Fitness trackers aren’t serious medical devices. An intense workout or loose band can mess with the sensors that read your pulse. But an electrocardiogram—the kind doctors use to diagnose abnormalities before they cause a stroke or heart attack—requires a visit to a clinic, and people often fail to take the test in time.
ECG-enabled smart watches, made possible by new regulations and innovations in hardware and software, offer the convenience of a wearable device with something closer to the precision of a medical one.
An Apple Watch–compatible band from Silicon Valley startup AliveCor that can detect atrial fibrillation, a frequent cause of blood clots and stroke, received clearance from the FDA in 2017. Last year, Apple released its own FDA-cleared ECG feature, embedded in the watch itself.
Making complex heart tests available at the push of a button has far-reaching consequences.
The health-device company Withings also announced plans for an ECG-equipped watch shortly after.
Current wearables still employ only a single sensor, whereas a real ECG has 12. And no wearable can yet detect a heart attack as it’s happening.
But this might change soon. Last fall, AliveCor presented preliminary results to the American Heart Association on an app and two-sensor system that can detect a certain type of heart attack. —Karen Hao
Sanitation without sewers
Why it matters: 2.3 billion people lack safe sanitation, and many die as a result.
Key players: Duke University, University of South Florida, Biomass Controls, California Institute of Technology
Availability: 1-2 years
Energy-efficient toilets can operate without a sewer system and treat waste on the spot.
About 2.3 billion people don’t have good sanitation. The lack of proper toilets encourages people to dump fecal matter into nearby ponds and streams, spreading bacteria, viruses, and parasites that can cause diarrhea and cholera. Diarrhea causes one in nine child deaths worldwide.
Now researchers are working to build a new kind of toilet that’s cheap enough for the developing world and can not only dispose of waste but treat it as well.
In 2011 Bill Gates created what was essentially the X Prize in this area—the Reinvent the Toilet Challenge. Since the contest’s launch, several teams have put prototypes in the field. All process the waste locally, so there’s no need for large amounts of water to carry it to a distant treatment plant.
Most of the prototypes are self-contained and don’t need sewers, but they look like traditional toilets housed in small buildings or storage containers. The NEWgenerator toilet, designed at the University of South Florida, filters out pollutants with an anaerobic membrane, which has pores smaller than bacteria and viruses. Another project, from Connecticut-based Biomass Controls, is a refinery the size of a shipping container; it heats the waste to produce a carbon-rich material that can, among other things, fertilize soil.
One drawback is that the toilets don’t work at every scale. The Biomass Controls product, for example, is designed primarily for tens of thousands of users per day, which makes it less well suited for smaller villages. Another system, developed at Duke University, is meant to be used only by a few nearby homes.
So the challenge now is to make these toilets cheaper and more adaptable to communities of different sizes. “It’s great to build one or two units,” says Daniel Yeh, an associate professor at the University of South Florida, who led the NEWgenerator team. “But to really have the technology impact the world, the only way to do that is mass-produce the units.” —Erin Winick
Smooth-talking AI assistants
Why it matters: AI assistants can now perform conversation-based tasks like booking a restaurant reservation or coordinating a package drop-off rather than just obeying simple commands.
Key players: Google, Alibaba, Amazon
Availability: 1-2 years
New techniques that capture semantic relationships between words are making machines better at understanding natural language.
We’re used to AI assistants—Alexa playing music in the living room, Siri setting alarms on your phone—but they haven’t really lived up to their alleged smarts. They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations.
But some recent advances are about to expand your digital assistant’s repertoire. In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.
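As a rough illustration of the task BERT solves, the open-source Hugging Face transformers library exposes pretrained BERT models through a fill-in-the-blank interface. This is a community reimplementation, not Google’s original setup.

```python
# Ask a pretrained BERT to predict a masked word, via the open-source
# Hugging Face `transformers` library (pip install transformers torch).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("The doctor told me to take the [MASK] twice a day."):
    print(candidate["token_str"], round(candidate["score"], 3))
```

Each candidate word comes back with a probability score, which is exactly the predict-the-missing-word objective the article describes.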
These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online.
Some are already here. Google Duplex, the eerily human-like upgrade of Google Assistant, can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments.
In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.
But while AI programs have gotten better at figuring out what you want, they still can’t understand a sentence. Lines are scripted or generated statistically, reflecting how hard it is to imbue machines with true language understanding. Once we cross that hurdle, we’ll see yet another evolution, perhaps from logistics coordinator to babysitter, teacher—or even friend? —Karen Hao
This is Part I of a two-part re-post from MIT Technology Review’s 10 Breakthrough Technologies for 2019. Guest curator Bill Gates was asked to choose this year’s list of inventions that will change the world for the better.
Part I: Bill Gates: How We’ll Invent the Future
I was honored when MIT Technology Review invited me to be the first guest curator of its 10 Breakthrough Technologies. Narrowing down the list was difficult. I wanted to choose things that not only will create headlines in 2019 but captured this moment in technological history—which got me thinking about how innovation has evolved over time.
My mind went to—of all things—the plow. Plows are an excellent embodiment of the history of innovation. Humans have been using them since 4000 BCE, when Mesopotamian farmers aerated soil with sharpened sticks. We’ve been slowly tinkering with and improving them ever since, and today’s plows are technological marvels.
But what exactly is the purpose of a plow? It’s a tool that creates more: more seeds planted, more crops harvested, more food to go around. In places where nutrition is hard to come by, it’s no exaggeration to say that a plow gives people more years of life. The plow—like many technologies, both ancient and modern—is about creating more of something and doing it more efficiently, so that more people can benefit.
Contrast that with lab-grown meat, one of the innovations I picked for this year’s 10 Breakthrough Technologies list. Growing animal protein in a lab isn’t about feeding more people. There’s enough livestock to feed the world already, even as demand for meat goes up. Next-generation protein isn’t about creating more—it’s about making meat better. It lets us provide for a growing and wealthier world without contributing to deforestation or emitting methane. It also allows us to enjoy hamburgers without killing any animals.
Put another way, the plow improves our quantity of life, and lab-grown meat improves our quality of life. For most of human history, we’ve put most of our innovative capacity into the former. And our efforts have paid off: worldwide life expectancy rose from 34 years in 1913 to 60 in 1973 and has reached 71 today.
Because we’re living longer, our focus is starting to shift toward well-being. This transformation is happening slowly. If you divide scientific breakthroughs into these two categories—things that improve quantity of life and things that improve quality of life—the 2009 list looks not so different from this year’s. Like most forms of progress, the change is so gradual that it’s hard to perceive. It’s a matter of decades, not years—and I believe we’re only at the midpoint of the transition.
To be clear, I don’t think humanity will stop trying to extend life spans anytime soon. We’re still far from a world where everyone everywhere lives to old age in perfect health, and it’s going to take a lot of innovation to get us there. Plus, “quantity of life” and “quality of life” are not mutually exclusive. A malaria vaccine would both save lives and make life better for children who might otherwise have been left with developmental delays from the disease.
We’ve reached a point where we’re tackling both ideas at once, and that’s what makes this moment in history so interesting. If I had to predict what this list will look like a few years from now, I’d bet technologies that alleviate chronic disease will be a big theme. This won’t just include new drugs (although I would love to see new treatments for diseases like Alzheimer’s on the list). The innovations might look like a mechanical glove that helps a person with arthritis maintain flexibility, or an app that connects people experiencing major depressive episodes with the help they need.
If we could look even further out—let’s say the list 20 years from now—I would hope to see technologies that center almost entirely on well-being. I think the brilliant minds of the future will focus on more metaphysical questions: How do we make people happier? How do we create meaningful connections? How do we help everyone live a fulfilling life?
I would love to see these questions shape the 2039 list, because it would mean that we’ve successfully fought back disease (and dealt with climate change). I can’t imagine a greater sign of progress than that. For now, though, the innovations driving change are a mix of things that extend life and things that make it better. My picks reflect both. Each one gives me a different reason to be optimistic for the future, and I hope they inspire you, too.
My selections include amazing new tools that will one day save lives, from simple blood tests that predict premature birth to toilets that destroy deadly pathogens. I’m equally excited by how other technologies on the list will improve our lives. Wearable health monitors like the wrist-based ECG will warn heart patients of impending problems, while others let diabetics not only track glucose levels but manage their disease. Advanced nuclear reactors could provide carbon-free, safe, secure energy to the world.
One of my choices even offers us a peek at a future where society’s primary goal is personal fulfillment. Among many other applications, AI-driven personal agents might one day make your e-mail in-box more manageable—something that sounds trivial until you consider what possibilities open up when you have more free time.
The 30 minutes you used to spend reading e-mail could be spent doing other things. I know some people would use that time to get more work done—but I hope most would use it for pursuits like connecting with a friend over coffee, helping your child with homework, or even volunteering in your community.
Over the last year, the media have published story after story after story about the declining price of solar panels and wind turbines.
People who read these stories are understandably left with the impression that the more solar and wind energy we produce, the lower electricity prices will become.
And yet that’s not what’s happening. In fact, it’s the opposite.
What gives? If solar panels and wind turbines became so much cheaper, why did the price of electricity rise instead of decline?
Electricity prices increased by 51 percent in Germany during its expansion of solar and wind energy. EP
One hypothesis might be that while electricity from solar and wind became cheaper, other energy sources like coal, nuclear, and natural gas became more expensive, eliminating any savings, and raising the overall price of electricity.
But the price of nuclear and coal in those places during the same period was mostly flat.
Electricity prices increased 24 percent in California during its solar energy build-out from 2011 to 2017. EP
Another hypothesis might be that the closure of nuclear plants resulted in higher energy prices.
Evidence for this hypothesis comes from the fact that nuclear energy leaders Illinois, France, Sweden and South Korea enjoy some of the cheapest electricity in the world.
Since 2010, California closed one nuclear plant (2,140 MW installed capacity) while Germany closed 5 nuclear plants and 4 other reactors at currently-operating plants (10,980 MW in total).
Electricity in Illinois is 42 percent cheaper than electricity in California while electricity in France is 45 percent cheaper than electricity in Germany.
But this hypothesis is undermined by the fact that the price of the main replacement fuels, natural gas and coal, remained low, despite increased demand for those two fuels in California and Germany.
That leaves us with solar and wind as the key suspects behind higher electricity prices. But why would cheaper solar panels and wind turbines make electricity more expensive?
The main reason appears to have been predicted by a young German economist in 2013.
In a paper for Energy Policy, Lion Hirth estimated that the economic value of wind and solar would decline significantly as they become a larger part of electricity supply.
The reason? Their fundamentally unreliable nature. Both solar and wind produce too much energy when societies don’t need it, and not enough when they do.
Solar and wind thus require that natural gas plants, hydro-electric dams, batteries or some other form of reliable power be ready at a moment’s notice to start churning out electricity when the wind stops blowing and the sun stops shining.
And unreliability requires solar- and/or wind-heavy places like Germany, California and Denmark to pay neighboring nations or states to take their solar and wind energy when they are producing too much of it.
Hirth predicted that the economic value of wind on the European grid would decline 40 percent once it becomes 30 percent of electricity while the value of solar would drop by 50 percent when it got to just 15 percent.
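A toy merit-order simulation makes the mechanism visible: as installed wind grows, prices during windy hours fall, so the average price wind actually earns drops relative to the market average. The numbers below are invented; this is an illustration of the effect, not Hirth’s model.

```python
# Toy illustration of the declining market value ("value factor") of wind.
# All numbers are made up; the shape of the result is the point.
import numpy as np

rng = np.random.default_rng(1)
hours = 8760
demand = 50.0                                # GW, held flat for simplicity
wind_profile = rng.uniform(0, 1, hours)      # hourly wind availability

for capacity in [5, 15, 30, 45]:             # GW of installed wind
    wind_out = capacity * wind_profile
    residual = np.clip(demand - wind_out, 0, None)
    price = 10 + 1.5 * residual              # higher residual load -> higher price
    # Revenue-weighted price: what wind actually earns per MWh produced.
    wind_value = (price * wind_out).sum() / wind_out.sum()
    share = wind_out.sum() / (demand * hours)
    print(f"wind share {share:5.1%}  value factor {wind_value / price.mean():.2f}")
```

Because windy hours are exactly the hours when prices are depressed, the value factor falls below 1 and keeps falling as the wind share rises.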
In 2017, the share of electricity coming from wind and solar was 53 percent in Denmark, 26 percent in Germany, and 23 percent in California. Denmark and Germany have the first and second most expensive electricity in Europe.
By reporting on the declining costs of solar panels and wind turbines but not on how they increase electricity prices, journalists are — intentionally or unintentionally — misleading policymakers and the public about those two technologies.
The Los Angeles Times last year reported that California’s electricity prices were rising, but failed to connect the price rise to renewables, provoking a sharp rebuttal from UC Berkeley economist James Bushnell.
“The story of how California’s electric system got to its current state is a long and gory one,” Bushnell wrote, but “the dominant policy driver in the electricity sector has unquestionably been a focus on developing renewable sources of electricity generation.”
Part of the problem is that many reporters don’t understand electricity. They think of electricity as a commodity when it is, in fact, a service — like eating at a restaurant.
The price we pay for the luxury of eating out isn’t just the cost of the ingredients, most of which, like solar panels and wind turbines, have declined in price for decades.
Rather, the prices of services like eating out and electricity reflect the cost not only of the ingredients but also of their preparation and delivery.
This is a problem of bias, not just energy illiteracy. Normally skeptical journalists routinely give renewables a pass.
The reason isn’t that they don’t know how to report critically on energy, they do so regularly when it comes to non-renewable energy sources, but rather that they don’t want to.
That could — and should — change. Reporters have an obligation to report accurately and fairly on all issues they cover, especially ones as important as energy and the environment.
A good start would be for them to investigate why, if solar and wind are so cheap, they are making electricity so expensive.
The Fourth Industrial Revolution has made for big strides in manufacturing, especially with the additions of robotics and 3D printing. But one field has been advancing the notion of thinking small. Nanotechnology, or the study and application of manipulating matter at the nanoscale, has uncovered the existence of a world that’s a thousand times smaller than a fly’s eye. It has also led to the development of materials and techniques that have enhanced production capabilities.
Nanotechnology continues to have a broad impact on different sectors. In fact, the worldwide market will likely exceed $125 billion by 2024. Ranging from stain-resistant fabric to more affordable solar cells, nanotechnology applications have been improving our daily lives. As research continues, advances in this space are opening up possibilities for more promising innovations.
A Closer Look at the Nanoscale
In the metric system, “nano” means a factor of one billionth, which means that a nanometer (nm) is one-billionth of a meter. Forms of matter at the nanoscale usually have lengths from the atomic level of around 0.1 nm up to 100 nm.
What makes the nanoscale extraordinary is that the properties and characteristics of matter are different on this level. Some materials can become more efficient at conducting electricity or heat. Others reflect light better. There are also materials that become stronger. The list goes on. For instance, the metal copper on the nanoscale is transparent. Gold, which is normally unreactive, becomes chemically active. Carbon, which is soft in its usual form, becomes incredibly hard when packed into a nanoscopic arrangement called a “nanotube”. These characteristics are crucial for numerous nanotechnology applications.
The reason chemical properties alter at the nanoscale is that it’s easier for particles to move around and between one another. Additionally, gravity becomes much less important than the electromagnetic forces between atoms and molecules. Thermal vibrations also become extremely significant. In short, the rules of science are very different at the nanoscale. It’s one of the factors that make nanotechnology research and nanotechnology applications so fascinating.
Creating lighter, sturdier, and safer materials is possible with nanotechnology. Many of those materials can also withstand great pressures and weights. Nanomaterials, or structures at the nanoscale, enable the advanced manufacturing of innovative, next-generation products that provide higher performance at a lower cost and improved sustainability.
Exploring the Nanotech Space, One Atom at a Time
A few well-known companies have been exploring the substantial profit potential of nanotechnology applications.
IBM has invested more than $3 billion for the development of semiconductors that will be seven nanometers or less. The company has also been exploring new nanomanufacturing techniques. Additionally, IBM holds the distinction of producing the world’s smallest and fastest graphene chip.
Samsung has also been active in nanotechnology research. The electronics giant has filed more than 400 patents related to graphene. Such patents involve manufacturing processes and touch screens, among other nanotechnology applications. Moreover, Samsung has funded an effort to develop its first generation graphene batteries.
One of the notable startups that has been gaining traction in this space is NuMat Technologies. The company creates intelligently engineered systems through the integration of programmable nanomaterials. NuMat is also the first company in the world to commercialize products enabled by metal-organic frameworks (MOFs). These are nanomaterials with vast surface areas, highly tunable porosities, and near-infinite combinatorial possibilities. Nanotechnology applications of MOFs involve products with improved performance and otherwise-unachievable flexibility in form factors. Additionally, they have cost-advantage production economics.
Founder Benjamin Hernandez believes that one of the most important uses of nanotechnology is solving challenges related to sustainability.
“I think conceptually that’s kind of the wave of the future, using atomic-scale machines or engineering to solve complex macro problems,” Hernandez said.
Moreover, NuMat uses artificial intelligence to design MOFs. The company has total funding of $22.3 million so far. NuMat continues extensive research to develop more nanotechnology applications for the future.
For something so small, it’s understandable that few fully grasp the uses of nanotechnology.
Making a Difference with Nanotechnology Research
The ones mentioned above are just a few of the thousands of uses of nanotechnology. Achievements in the field seem to be announced almost daily. However, businesses must also place greater importance on using nanotechnology for more sustainable manufacturing. After all, advantages include reduced consumption of raw materials. Another benefit is the substitution of more abundant or less toxic materials than the ones presently used. Moreover, nanotechnology applications can lead to the development of cleaner and less wasteful manufacturing processes.
Professor Sijie Lin at Tongji University is optimistic about the prevalence of sustainability in nanotechnology applications.
“Designing safer nanomaterials and nanostructures has gained increasing attention in the field of nanoscience and technology in recent years,” Lin said. “Based on the body of experimental evidence contributed by environmental health and safety studies, materials scientists now have a better grasp on the relationships between the nanomaterials’ physicochemical characteristics and their hazard and safety profiles.”
According to Markus Antonietti, director of the Max Planck Institute of Colloids and Interfaces, more work needs to be done to increase awareness of nanotechnology applications. “But there also needs to be a focus on education and getting information to the public at large,” he noted. “The best part is that all of this could happen immediately if we simply spread the information in an understandable way. People don’t read science journals, so they don’t even know that all of this is possible.”
Article Re-Posted from Bold Business
For more on Bold Business’ examination of the Fourth Industrial Revolution, check out these stories on 3D Printing and Supply-Chain Automation.
Bloomberg New Energy Finance finds the long-term costs of multi-hour energy storage can compete with natural gas and coal in an increasing number of markets today.
The long-term cost of supplying grid electricity from today’s lithium-ion batteries is falling even faster than expected, making them an increasingly cost-competitive alternative to natural-gas-fired power plants across a number of key energy markets.
That’s the key finding from a Tuesday report from Bloomberg New Energy Finance on the levelized cost of energy (LCOE) — the cost of a technology delivering energy over its lifespan — for a number of key clean energy technologies worldwide.
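For readers new to the metric, the standard formula discounts lifetime costs and lifetime energy to the same point in time:

$$\mathrm{LCOE} = \frac{\sum_{t=0}^{T} C_t/(1+r)^t}{\sum_{t=0}^{T} E_t/(1+r)^t}$$

where $C_t$ is all spending in year $t$ (capital, operations and, for a battery, charging), $E_t$ is the energy delivered that year, and $r$ is the discount rate.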
According to its analysis of public and proprietary data from more than 7,000 projects worldwide, this benchmark LCOE for lithium-ion batteries has fallen by 35 percent, to $187 per megawatt-hour, since the first half of 2018. This precipitous decline has outpaced the continuing slide in LCOE for solar PV and onshore and offshore wind power.
Over the past year, offshore wind saw a 24 percent decline in LCOE to fall below $100 per megawatt-hour, compared to about $220 per megawatt-hour only five years ago.
The benchmark LCOE for onshore wind and solar PV fell by 10 percent and 18 percent, respectively, to reach $50 and $57 per megawatt-hour for projects starting construction in early 2019.
To be sure, these generation technologies are still far cheaper than batteries in terms of their LCOEs — and that’s not mentioning the fact that they actually make electricity, rather than simply storing it for later use. To convert a battery’s storage capacity into a LCOE figure, the report models a utility-scale battery installation running daily cycles, with charging costs assumed to be at 60 percent of the wholesale base power price for the country in question.
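A minimal sketch of that conversion, under BNEF’s stated assumptions of daily cycling and charging at 60 percent of the wholesale base price, might look like the following; every input figure here is invented for illustration, not BNEF’s data.

```python
# Illustrative battery LCOE: utility-scale battery, one cycle per day,
# charged at 60% of the wholesale base power price (per BNEF's setup).
# All input numbers below are invented.

def battery_lcoe(capex=1_200_000.0,       # $ per MW of power capacity
                 storage_mwh=4.0,         # MWh of storage per MW (4-hour)
                 wholesale_price=40.0,    # $/MWh base power price
                 efficiency=0.88,         # round-trip efficiency
                 cycles_per_year=365,
                 years=15,
                 discount_rate=0.08):
    charging_price = 0.6 * wholesale_price        # the 60% assumption
    disc_cost, disc_energy = capex, 0.0
    for t in range(1, years + 1):
        d = (1 + discount_rate) ** t
        disc_cost += storage_mwh * cycles_per_year * charging_price / d
        disc_energy += storage_mwh * cycles_per_year * efficiency / d
    return disc_cost / disc_energy                # $/MWh delivered

print(f"illustrative 4-hour battery LCOE: ${battery_lcoe():.0f}/MWh")
```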
Even so, the pace of the decline in battery LCOE, particularly for multi-hour storage applications that previous generations of lithium-ion technologies have struggled to provide, is startling, BNEF notes. Since 2012, the benchmark LCOE of lithium-ion batteries configured to supply four hours of grid power — a standard requirement for many grid services — has fallen by 74 percent, as extrapolated from historical data.
In comparison, the LCOE per megawatt-hour for onshore wind, solar PV and offshore wind has fallen by 49 percent, 84 percent and 56 percent, respectively, since 2010.
In fact, the LCOE for multi-hour lithium-ion batteries is falling to the point that “batteries co-located with solar or wind projects are starting to compete, in many markets and without subsidy, with coal- and gas-fired generation for the provision of ‘dispatchable power’ that can be delivered whenever the grid needs it (as opposed to only when the wind is blowing, or the sun is shining),” the report notes.
These findings match those we’ve been covering from our own analysts at Wood Mackenzie Power & Renewables, as well as from the broader industry. In the past year and a half, several large-scale solar-battery requests for proposals have set record-low prices, including Xcel Energy in Colorado with solar-plus-storage bids as low as $36 per megawatt-hour, compared to $25 per megawatt-hour for standalone solar, and NV Energy reporting even lower bids in its solar and solar-plus-storage RFPs.
These price points equate to about a $6 to $7 per megawatt-hour premium for solar projects that are partially “dispatchable” in the manner of a traditional power plant, compared to standalone solar, Ravi Manghani, WoodMac energy storage research director, reported at Greentech Media’s Energy Storage Summit in December.
Just this week, clean energy advocacy and research organization Energy Innovation and Vibrant Clean Energy released a report finding that the LCOE of new renewables in the U.S. is lower than that of nearly three-quarters of the U.S. coal fleet — a not completely surprising finding, given the coal power industry’s well-documented challenges in competing with cheap natural gas, and increasingly cheap wind and solar power.
At the same time, it’s worth noting that the current trends in pricing for lithium-ion batteries, meaning what they actually cost today, have been mixed. While continuing technology improvements and increasing scale of manufacturing have continued to push down prices, these have been somewhat counterbalanced in the past year or so by a bottleneck in available supply, driven by a boom in demand from big projects in the U.S. and South Korea.
WoodMac found that battery rack prices fell by only about 6 percent from 2017 to 2018, rather than the 14 percent previously predicted, based on these supply shortage challenges.
In these tumor cells, acidic regions are labeled in red. Invasive regions of the cells, which express a protein called MMP14, are labeled in green. Image: Nazanin Rohani
Acidic environment triggers genes that help cancer cells metastasize.
Scientists have long known that tumors have many pockets of high acidity, usually found deep within the tumor where little oxygen is available. However, a new study from MIT researchers has found that tumor surfaces are also highly acidic, and that this acidity helps tumors to become more invasive and metastatic.
The study found that the acidic environment helps tumor cells to produce proteins that make them more aggressive. The researchers also showed that they could reverse this process in mice by making the tumor environment less acidic.
“Our findings reinforce the view that tumor acidification is an important driver of aggressive tumor phenotypes, and it indicates that methods that target this acidity could be of value therapeutically,” says Frank Gertler, an MIT professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.
Former MIT postdoc Nazanin Rohani is the lead author of the study, which appears in the journal Cancer Research.
Mapping acidity
Scientists usually attribute a tumor’s high acidity to the lack of oxygen, or hypoxia, that often occurs in tumors because they don’t have an adequate blood supply. However, until now, it has been difficult to precisely map tumor acidity and determine whether it overlaps with hypoxic regions.
In this study, the MIT team used a probe called pH (Low) Insertion Peptide (pHLIP), originally developed by researchers at the University of Rhode Island, to map the acidic regions of breast tumors in mice. This peptide is floppy at normal pH but becomes more stable at low, acidic pH. When this happens, the peptide can insert itself into cell membranes. This allows the researchers to determine which cells have been exposed to acidic conditions, by identifying cells that have been tagged with the peptide.
To their surprise, the researchers found that not only were cells in the oxygen-deprived interior of the tumor acidic, there were also acidic regions at the boundary of the tumor and the structural tissue that surrounds it, known as the stroma.
“There was a great deal of tumor tissue that did not have any hallmarks of hypoxia that was quite clearly exposed to acidosis,” Gertler says. “We started looking at that, and we realized hypoxia probably wouldn’t explain the majority of regions of the tumor that were acidic.”
A new study explores how an acidic environment drives tumor spread.
Further investigation revealed that many of the cells at the tumor surface had shifted to a type of cell metabolism known as aerobic glycolysis. This process generates lactic acid as a byproduct, which could account for the high acidity, Gertler says. The researchers also discovered that in these acidic regions, cells had turned on gene expression programs associated with invasion and metastasis. Nearly 3,000 genes showed pH-dependent changes in activity, and close to 300 displayed changes in how the genes are assembled, or spliced.
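For reference, the lactic-acid byproduct mentioned above comes from the net fermentation of glucose, the reaction at the heart of aerobic glycolysis:

$$\mathrm{C_6H_{12}O_6} \rightarrow 2\,\mathrm{CH_3CH(OH)COOH}$$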
“Tumor acidosis gives rise to the expression of molecules involved in cell invasion and migration. This reprogramming, which is an intracellular response to a drop in extracellular pH, gives the cancer cells the ability to survive under low-pH conditions and proliferate,” Rohani says.
Those activated genes include Mena, which codes for a protein that normally plays a key role in embryonic development. Gertler’s lab had previously discovered that in some tumors, Mena is spliced differently, producing an alternative form of the protein known as MenaINV (invasive). This protein helps cells to migrate into blood vessels and spread through the body.
Another key protein that undergoes alternative splicing in acidic conditions is CD44, which also helps tumor cells to become more aggressive and break through the extracellular tissues that normally surround them. This study marks the first time that acidity has been shown to trigger alternative splicing for these two genes.
Reducing acidity
The researchers then decided to study how these genes would respond to decreasing the acidity of the tumor microenvironment. To do that, they added sodium bicarbonate to the mice’s drinking water. This treatment reduced tumor acidity and shifted gene expression closer to the normal state. In other studies, sodium bicarbonate has also been shown to reduce metastasis in mouse models.
Sodium bicarbonate would not be a feasible cancer treatment because it is not well-tolerated by humans, but other approaches that lower acidity could be worth exploring, Gertler says. The expression of new alternative splicing genes in response to the acidic microenvironment of the tumor helps cells survive, so this phenomenon could be exploited to reverse those programs and perturb tumor growth and potentially metastasis.
“Other methods that would more focally target acidification could be of great value,” he says.
The research was funded by the Koch Institute Support (core) Grant from the National Cancer Institute, the Howard Hughes Medical Institute, the National Institutes of Health, the KI Quinquennial Cancer Research Fellowship, and MIT’s Undergraduate Research Opportunities Program.
Other authors of the paper include Liangliang Hao, a former MIT postdoc; Maria Alexis and Konstantin Krismer, MIT graduate students; Brian Joughin, a lead research modeler at the Koch Institute; Mira Moufarrej, a recent graduate of MIT; Anthony Soltis, a recent MIT PhD recipient; Douglas Lauffenburger, head of MIT’s Department of Biological Engineering; Michael Yaffe, a David H. Koch Professor of Science; Christopher Burge, an MIT professor of biology; and Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science.
Researchers at Swinburne, the University of Sydney and Australian National University have collaborated to develop a solar absorbing, ultra-thin graphene-based film with unique properties that has great potential for use in solar thermal energy harvesting.
The 90 nanometre material is said to be 1000 times finer than a human hair and is able to rapidly heat up to 160°C under natural sunlight in an open environment.
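That ratio checks out, assuming a typical human hair diameter of roughly 90 micrometers:

$$\frac{90\ \mu\text{m}}{90\ \text{nm}} = \frac{90 \times 10^{-6}\ \text{m}}{90 \times 10^{-9}\ \text{m}} = 1000$$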
The team stated that this new graphene-based material may also open new avenues in thermophotovoltaics (the direct conversion of heat to electricity). It could possibly also lead to the development of ‘invisible cloaking technology’ through large-scale thin films enclosing the objects to be ‘hidden’.
The researchers have developed a 2.5cm x 5cm working prototype to demonstrate the photo-thermal performance of the graphene-based metamaterial absorber. They have also proposed a scalable manufacturing strategy to fabricate the proposed graphene-based absorber at low cost.
“This is among many graphene innovations in our group,” says Professor Baohua Jia, Research Leader, Nanophotonic Solar Technology, in Swinburne’s Centre for Micro-Photonics.
“In this work, the reduced graphene oxide layer and grating structures were coated with a solution and fabricated by a laser nanofabrication method, respectively, which are both scalable and low cost.”
“Our cost-effective and scalable graphene absorber is promising for integrated, large-scale applications that require polarisation-independent, angle insensitive and broad bandwidth absorption, such as energy-harvesting, thermal emitters, optical interconnects, photodetectors and optical modulators,” says first author of this research paper, Dr Han Lin, Senior Research Fellow in Swinburne’s Centre for Micro-Photonics.
“Fabrication on a flexible substrate and the robustness stemming from graphene make it suitable for industrial use,” Dr Keng-Te Lin, another author, added.
“The physical effect causing this outstanding absorption in such a thin layer is quite general and thereby opens up a lot of exciting applications,” says Dr Bjorn Sturmberg, who completed his PhD in physics at the University of Sydney in 2016 and now holds a position at the Australian National University.
“The result shows what can be achieved through collaboration between different universities, in this case with the University of Sydney and Swinburne, each bringing in their own expertise to discover new science and applications for our science,” says Professor Martijn de Sterke, Director of the Institute of Photonics and Optical Science.
“Through our collaboration we came up with a very innovative and successful result. We have essentially developed a new class of optical material, the properties of which can be tuned for multiple uses.”
While EVs have come a long way — even Ford is making electric trucks — they’re still a far cry from perfect. One of the biggest complaints is that the batteries need to be plugged in and recharged, and even when they’re charged, they have a limited range. Fuel cell electric vehicles offer an alternative.
Their “battery” — actually a hydrogen/oxygen fuel cell — can be replenished with hydrogen gas. The biggest problem to date has been that producing hydrogen isn’t an environmentally friendly process. We would also need the infrastructure to refuel with hydrogen. But new technology from UMass Lowell could remove those barriers.
Researchers there have created a way to produce hydrogen on demand using water, carbon dioxide and cobalt. Theoretically, that would go directly into a fuel cell, where it would mix with oxygen to generate electricity and water. The electricity would then power the EV’s motor, rechargeable battery and headlights.
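The fuel cell step itself is textbook electrochemistry: hydrogen and oxygen combine into water while driving a current through the external circuit.

$$2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} + \text{electrical energy}$$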
According to UMass Lowell, the hydrogen produced is 95 percent pure, and vehicles would not need to be refueled at a filling station. Instead, owners would replace canisters of the cobalt metal which would fuel the hydrogen generator.
Because the technology can produce hydrogen at low temperatures and pressures and because excess isn’t stored in the vehicle, it minimizes the risk of fire or explosion. While this isn’t a practical application yet, it could help make FCEVs a viable option.
In the statement below, UMass Lowell Chemistry Department Chairman Professor David Ryan said that vehicles would not be refueled at a fueling station.
“The system that we have devised would not require the vehicle to be refueled at a hydrogen filling station. Our technology would use canisters of the cobalt metal as the fuel to operate the hydrogen generator. The canisters would be swapped out when expended. It’s really too early to tell, but the goal is typically to be able to travel up to 350 to 400 miles for most vehicles before ‘refueling.’”
The panel, moderated by CARS Executive Director Stephen Zoepf, features companies that seek to catalyze electrification of transport, each focused on a different sector of the market.
From an all-electric chassis to electric mobility services at scale to fast & portable electric chargers to electric, highly-utilized AVs, this Energy Seminar will highlight the cutting edge in electric mobility.