Researchers Develop a Universal DNA Nano-Signature for Early Cancer Detection – University of Queensland


Killer T cells surround cancer cell. Credit: NIH

Researchers from the University of Queensland’s Australian Institute for Bioengineering and Nanotechnology (AIBN) have discovered a unique nano-scaled DNA signature that appears to be common to all cancers.

Based on this discovery, the team has developed a test that enables cancer to be quickly and easily detected from any tissue type, e.g. blood or biopsy.

The study, which was supported by a grant from the National Breast Cancer Foundation and is published in the journal Nature Communications, reveals new insight about how epigenetic reprogramming in cancer regulates the physical and chemical properties of DNA and could lead to an entirely new approach to point-of-care diagnostics.

“Because cancer is an extremely complicated and variable disease, it has been difficult to find a simple signature common to all cancers, yet distinct from healthy cells,” explains AIBN researcher Dr. Abu Sina.

To address this, Dr. Sina and Dr. Laura Carrascosa, who are working with Professor Matt Trau at AIBN, focussed on something called circulating free DNA.

Like healthy cells, cancer cells are always in the process of dying and renewing. When they die, they essentially explode and release their cargo, including DNA, which then circulates.

“There’s been a big hunt to find whether there is some distinct DNA signature that is just in the cancer and not in the rest of the body,” says Dr. Carrascosa.

So they examined epigenetic patterns on the genomes of cancer cells and healthy cells. In other words, they looked for patterns of molecules, called methyl groups, which decorate the DNA. These methyl groups are important to cell function because they serve as signals that control which genes are turned on and off at any given time.

In healthy cells, these methyl groups are spread out across the genome. However, the AIBN team discovered that the genome of a cancer cell is essentially barren except for intense clusters of methyl groups at very specific locations.

This unique signature—which they dubbed the cancer “methylscape”, for methylation landscape—appeared in every type of breast cancer they examined and appeared in other forms of cancer, too, including prostate cancer, colorectal cancer and lymphoma.
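As a toy illustration only (this is not the study's actual analysis), the contrast between a spread-out and a clustered methylation pattern can be captured by comparing the variability of the gaps between methylated sites along a genomic coordinate. Every position below is invented:

```python
from statistics import pstdev, mean

def gap_cv(methyl_positions):
    """Coefficient of variation of gaps between methylated sites.
    Evenly spread sites give a value near zero; tight clusters separated
    by long barren stretches give a large value."""
    gaps = [b - a for a, b in zip(methyl_positions, methyl_positions[1:])]
    return pstdev(gaps) / mean(gaps)

# Hypothetical coordinates (base pairs), invented for illustration only.
healthy = list(range(0, 10000, 500))              # evenly spaced methyl groups
cancer = [0, 10, 20, 30, 5000, 5010, 5020, 5030]  # dense clusters, barren elsewhere

print(f"healthy gap CV: {gap_cv(healthy):.2f}")   # near 0 -> uniform spread
print(f"cancer  gap CV: {gap_cv(cancer):.2f}")    # large  -> clustering
```

The high value for the clustered pattern reflects the "intense clusters at specific locations" the AIBN team describes, while the uniform pattern scores near zero.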

“Virtually every piece of cancerous DNA we examined had this highly predictable pattern,” says Professor Trau.

He says that if you think of a cell as a hard-drive, then the new findings suggest that cancer needs certain genetic programmes or apps in order to run.

“It seems to be a general feature for all cancer,” he says. “It’s a startling discovery.”

They also discovered that, when placed in solution, those intense clusters of methyl groups cause cancer DNA fragments to fold up into three-dimensional nanostructures that really like to stick to gold.

Taking advantage of this, the researchers designed an assay which uses gold nanoparticles that instantly change colour depending on whether or not these 3-D nanostructures of cancer DNA are present.

“This happens in one drop of fluid,” says Trau. “You can detect it by eye, it’s as simple as that.”

The technology has also been adapted for electrochemical systems, which allows inexpensive and portable detection that could eventually be performed using a mobile phone.

So far they’ve tested the new technology on 200 samples across different types of human cancers. In some cases, the accuracy of cancer detection runs as high as 90%.
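For readers curious how such an accuracy figure is computed, the sketch below shows the arithmetic on a hypothetical split of 200 samples; the counts are invented and are not the study's actual results.

```python
def detection_accuracy(true_pos, true_neg, false_pos, false_neg):
    """Fraction of all samples classified correctly."""
    total = true_pos + true_neg + false_pos + false_neg
    return (true_pos + true_neg) / total

# Hypothetical split of 200 samples -- illustrative only.
acc = detection_accuracy(true_pos=95, true_neg=85, false_pos=10, false_neg=10)
print(f"accuracy: {acc:.0%}")  # -> accuracy: 90%
```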

“It works for tissue-derived genomic DNA and blood-derived circulating free DNA,” says Sina. “This new discovery could be a game-changer in the field of point-of-care cancer diagnostics.” It’s not perfect yet, but it’s a promising start and will only get better with time, says the team.

“We certainly don’t know yet whether it’s the Holy Grail or not for all cancer diagnostics,” says Trau, “but it looks really interesting as an incredibly simple universal marker of cancer, and as a very accessible and inexpensive technology that does not require complicated lab based equipment like DNA sequencing.”

More information: Abu Ali Ibn Sina et al., “Epigenetically reprogrammed methylation landscape drives the DNA self-assembly and serves as a universal cancer biomarker,” Nature Communications (2018). DOI: 10.1038/s41467-018-07214-w

Provided by University of Queensland


RMIT – Study unlocks full potential of graphene ‘supermaterial’


Drs. Esrafilzadeh and Jalili working on 3D-printed graphene mesh in the lab.
Credit: RMIT University

New research reveals why the “supermaterial” graphene has not transformed electronics as promised, and shows how to double its performance and finally harness its extraordinary potential.

Graphene is the strongest material ever tested. It’s also flexible, transparent and conducts heat and electricity 10 times better than copper.

After graphene research won the Nobel Prize for Physics in 2010 it was hailed as a transformative material for flexible electronics, more powerful computer chips and solar panels, water filters and bio-sensors. But performance has been mixed and industry adoption slow.

Now a study published in Nature Communications identifies silicon contamination as the root cause of disappointing results and details how to produce higher performing, pure graphene.

The RMIT University team led by Dr Dorna Esrafilzadeh and Dr Rouhollah Ali Jalili inspected commercially-available graphene samples, atom by atom, with a state-of-the-art scanning transmission electron microscope.

“We found high levels of silicon contamination in commercially available graphene, with massive impacts on the material’s performance,” Esrafilzadeh said.

Testing showed that silicon present in natural graphite, the raw material used to make graphene, was not being fully removed when processed.

“We believe this contamination is at the heart of many seemingly inconsistent reports on the properties of graphene and perhaps many other atomically thin two-dimensional (2D) materials,” Esrafilzadeh said.

Graphene has not become the next big thing, RMIT researchers say, because silicon impurities have been holding it back.

“Graphene was billed as being transformative, but has so far failed to make a significant commercial impact, as have some similar 2D nanomaterials. Now we know why it has not been performing as promised, and what needs to be done to harness its full potential.”

The testing not only identified these impurities but also demonstrated the major influence they have on performance, with contaminated material performing up to 50% worse when tested as electrodes.

“This level of inconsistency may have stymied the emergence of major industry applications for graphene-based systems.

“But it’s also preventing the development of regulatory frameworks governing the implementation of such layered nanomaterials, which are destined to become the backbone of next-generation devices,” she said.

The two-dimensional property of graphene sheeting, which is only one atom thick, makes it ideal for electricity storage and new sensor technologies that rely on high surface area.

This study reveals how that 2D property is also graphene’s Achilles’ heel, by making it so vulnerable to surface contamination, and underscores how important high purity graphite is for the production of more pure graphene.

Using pure graphene, researchers demonstrated how the material performed extraordinarily well when used to build a supercapacitor, a kind of super battery.

When tested, the device’s capacity to hold electrical charge was massive. In fact, it was the biggest capacity so far recorded for graphene and within sight of the material’s predicted theoretical capacity.
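For context on what a supercapacitor's "capacity to hold electrical charge" translates to in energy terms, stored energy follows E = ½CV². The values below are generic illustrative numbers, not RMIT's measurements.

```python
def supercap_energy_wh(capacitance_f, voltage_v):
    """Energy stored in a capacitor, E = 1/2 * C * V^2, converted
    from joules to watt-hours."""
    joules = 0.5 * capacitance_f * voltage_v ** 2
    return joules / 3600.0

# Illustrative numbers only: a 3000 F cell rated at 2.7 V.
print(f"{supercap_energy_wh(3000, 2.7):.2f} Wh")  # -> 3.04 Wh
```

Because energy grows with the square of voltage and linearly with capacitance, record-setting capacitance (as reported here) directly raises the energy a graphene electrode can store.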

In collaboration with RMIT’s Centre for Advanced Materials and Industrial Chemistry, the team then used pure graphene to build a versatile humidity sensor with the highest sensitivity and the lowest limit of detection ever reported.

These findings constitute a vital milestone for the complete understanding of atomically thin two-dimensional materials and their successful integration within high performance commercial devices.

“We hope this research will help to unlock the exciting potential of these materials.”

Story Source:

Materials provided by RMIT University. Note: Content may be edited for style and length.


Journal Reference:

  1. Rouhollah Jalili, Dorna Esrafilzadeh, Seyed Hamed Aboutalebi, Ylias M. Sabri, Ahmad E. Kandjani, Suresh K. Bhargava, Enrico Della Gaspera, Thomas R. Gengenbach, Ashley Walker, Yunfeng Chao, Caiyun Wang, Hossein Alimadadi, David R. G. Mitchell, David L. Officer, Douglas R. MacFarlane, Gordon G. Wallace. Silicon as a ubiquitous contaminant in graphene derivatives with significant impact on device performance. Nature Communications, 2018; 9 (1) DOI: 10.1038/s41467-018-07396-3

Will Drexel’s Discovery Enable a Lithium-Sulfur ‘Battery (R)evolution’?


Lithium-sulfur batteries could be the energy storage devices of the future, if they can get past a chemical phenomenon that reduces their endurance. Drexel researchers have reported a method for making a sulfur cathode that could preserve the batteries’ exceptional performance. (Image from Drexel News)

Drexel’s College of Engineering reports that researchers and the industry are looking at Li-S batteries to eventually replace Li-ion batteries because of a new chemistry that theoretically allows more energy to be packed into a single battery.

This improved capacity, on the order of 5-10 times that of Li-ion batteries, equates to a longer run time for batteries between charges.

The problem, however, is that Li-S batteries have trouble maintaining their superiority beyond just a few recharge cycles. New research may have found a solution.

The new approach, reported in a recent edition of the American Chemical Society journal Applied Materials and Interfaces, holds polysulfides in place, maintaining the battery’s impressive stamina while reducing the overall weight and the time required to produce the electrodes.

“We have created freestanding porous titanium monoxide nanofiber mat as a cathode host material in lithium-sulfur batteries,” said Vibha Kalra, PhD, an associate professor in the College of Engineering who led the research.


“This is a significant development because we have found that our titanium monoxide-sulfur cathode is both highly conductive and able to bind polysulfides via strong chemical interactions, which means it can augment the battery’s specific capacity while preserving its impressive performance through hundreds of cycles.

“We can also demonstrate the complete elimination of binders and current collector on the cathode side that account for 30-50 percent of the electrode weight — and our method takes just seconds to create the sulfur cathode, when the current standard can take nearly half a day.”


Please find the full report here: LINK
TiO Phase Stabilized into Free-Standing Nanofibers as Strong Polysulfide Immobilizer in Li-S Batteries: Evidence for Lewis Acid-Base Interactions
Arvinder Singh and Vibha Kalra

ACS Appl. Mater. Interfaces, Just Accepted Manuscript

DOI: 10.1021/acsami.8b11029

We report the stabilization of titanium monoxide (TiO) nanoparticles in nanofibers through electrospinning and carbothermal processes and their unique bi-functionality – high conductivity and ability to bind polysulfides – in Li-S batteries. The developed 3-D TiO/CNF architecture with the inherent inter-fiber macropores of nanofiber mats provides a much higher surface area (~427 m2 g-1) and overcomes the challenges associated with the use of highly dense powdered Ti-based suboxides/monoxide materials, thereby allowing for high active sulfur loading among other benefits.

The developed TiO/CNF-S cathodes exhibit high initial discharge capacities of ~1080 mAh g-1, ~975 mAh g-1, and ~791 mAh g-1 at 0.1C, 0.2C, and 0.5C rates, respectively with long term cycling. Furthermore, free-standing TiO/CNF-S cathodes developed with rapid sulfur melt infiltration (~5 sec) eradicate the need of inactive elements viz. binders, additional current collectors (Al-foil) and additives. Using postmortem XPS and Raman analysis, this study is the first to reveal the presence of strong Lewis acid-base interaction between TiO (3d2) and Sx2- through coordinate covalent Ti-S bond formation.

Our results highlight the importance of developing Ti-suboxides/monoxide based nanofibrous conducting polar host materials for next-generation Li-S batteries.

“Reprinted with permission from (DOI: 10.1021/acsami.8b11029). Copyright (2018) American Chemical Society.”
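To put the discharge capacities quoted in the abstract in perspective: sulfur's theoretical capacity is commonly cited as about 1675 mAh g⁻¹, and a rate of C/n nominally discharges the full capacity in n hours. A quick sketch using those standard figures (the arithmetic is ours, not the authors'):

```python
THEORETICAL_S_CAPACITY = 1675  # mAh/g, standard figure for sulfur cathodes

def utilization(measured_mah_g):
    """Fraction of sulfur's theoretical capacity actually delivered."""
    return measured_mah_g / THEORETICAL_S_CAPACITY

def discharge_hours(c_rate):
    """A C-rate of x nominally discharges the full capacity in 1/x hours."""
    return 1.0 / c_rate

# Reported TiO/CNF-S initial discharge capacities at three rates.
for rate, cap in [(0.1, 1080), (0.2, 975), (0.5, 791)]:
    print(f"{rate}C ({discharge_hours(rate):.0f} h discharge): "
          f"{cap} mAh/g = {utilization(cap):.0%} of theoretical")
```

By this estimate the reported ~1080 mAh g⁻¹ at 0.1C corresponds to roughly two-thirds of sulfur's theoretical capacity.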

 

 

AI and Nanotechnology Team Up to bring Humans to the brink of IMMORTALITY, top scientist claims


IMMORTAL: Human beings could soon live forever 

HUMAN beings becoming immortal is a step closer following the launch of a new start-up.

Dr Ian Pearson has previously said people will have the ability to “not die” by 2050 – just over 30 years from now.

Two of the methods he said humans might use were “body part renewal” and linking bodies with machines so that people are living their lives through an android.

But after Dr Pearson’s predictions, immortality may now be a step nearer following the launch of a new start-up.

Humai is hoping to make the immortality dream a reality with an ambitious plan.

Josh Bocanegra, the CEO of the company, said he is hoping to use Artificial Intelligence technology to create its own human being in the next three decades.

He said: “We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioural patterns, thought processes and information about how your body functions from the inside-out.

Watch: Live to 2050 and “Live Forever” – Really?

“This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human.

“Using cloning technology, we will restore the brain as it matures.” 

Last year, UK-based stem cell bank StemProtect said it could eventually develop treatments that allow humans to live until 200.

Mark Hall, from StemProtect, said at the time: “In just the same way as we might replace a joint such as a hip with a specially made synthetic device, we can now replace cells in the body with new cells which are healthy and younger versions of the ones they’re replacing.

“That means we can replace diseased or ageing cells – and parts of the body – with entirely new ones which are completely natural and healthy.”

Watch Dr. Ian Pearson Talk About the Possibility of Immortality by 2050

How a ‘solar battery’ could bring electricity to rural areas – A ‘solar flow’ battery could “harvest (energy) in the daytime and provide electricity in the evening”


New solar flow battery with a 14.1 percent efficiency. Photo: David Tenenbaum, UW-Madison

Solar energy is becoming more and more popular as prices drop, yet a home powered by the Sun isn’t free from the grid because solar panels don’t store energy for later. Now, researchers have refined a device that can both harvest and store solar energy, and they hope it will one day bring electricity to rural and underdeveloped areas.

The problem of energy storage has led to many creative solutions, like giant batteries. For a paper published today in the journal Chem, scientists trying to improve the solar cells themselves developed an integrated battery that works in three different ways.

It can work like a normal solar cell by converting sunlight to electricity immediately, explains study author Song Jin, a chemist at the University of Wisconsin at Madison. It can store the solar energy, or it can simply be charged like a normal battery.

“IT COULD HARVEST IN THE DAYTIME, PROVIDE ELECTRICITY IN THE EVENING.”

It’s a combination of two existing technologies: solar cells that harvest light, and a so-called flow battery.

The most commonly used batteries, lithium-ion, store energy in solid materials, like various metals. Flow batteries, on the other hand, store energy in external liquid tanks.

What is a ‘Flow Battery’?

This means they are very easy to scale for large projects. Scaling up all the components of a lithium-ion battery might throw off the engineering, but for flow batteries, “you just make the tank bigger,” says Timothy Cook, a University at Buffalo chemist and flow battery expert not involved in the study.

“You really simplify how to make the battery grow in capacity,” he adds. “We’re not making flow batteries to power a cell phone, we’re thinking about buildings or industrial sites.”

Jin and his team were the first to combine the two features. They have been working on the battery for years, and have now reached 14.1 percent efficiency.

Jin calls this “round-trip efficiency” — as in, the efficiency from taking that energy, storing it, and discharging it. “We can probably get to 20 percent efficiency in the next few years, and I think 25 percent round-trip is not out of the question,” Jin says.
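Round-trip efficiency here is simply the electricity discharged divided by the solar energy originally harvested; a minimal sketch with illustrative numbers:

```python
def round_trip_efficiency(solar_energy_in_wh, energy_out_wh):
    """Electricity discharged from storage divided by the solar
    energy harvested -- Jin's 'round-trip efficiency'."""
    return energy_out_wh / solar_energy_in_wh

# Illustrative: 100 Wh of harvested solar energy returning 14.1 Wh
# after storage and discharge, matching the reported 14.1 percent.
print(f"{round_trip_efficiency(100, 14.1):.1%}")  # -> 14.1%
```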

Apart from improving efficiency, Jin and his team want to develop a better design that can use cheaper materials.

The invention is still at proof-of-concept stage, but he thinks it could have a large impact in less-developed areas without power grids and proper infrastructure. “There, you could have a medium-scale device like this operate by itself,” he says. “It could harvest in the daytime, provide electricity in the evening.” In many areas, Jin adds, having electricity is a game changer, because it can help people be more connected or enable more clinics to be open and therefore improve health care.

And Cook notes that if the solar flow battery can be scaled, it can still be helpful in the US.

The United States might have plenty of power infrastructure, but with such a device, “you can disconnect and have personalized energy where you’re storing and using what you need locally,” he says. And that could help us be less dependent on forms of energy that harm the environment.

Read Genesis Nanotech News Online: Our Latest Edition


Genesis Nanotech News Online: Our Latest Edition with Articles Like –

Australian researchers design a rapid nano-filter that cleans dirty water 100X faster than current technology

Zombie Brain Cells Found in Mice

Energy Storage Technologies vie for Investment and Market Share

… AND …

Breakthrough Discovery: How groups of cells are able to build our tissues and organs while we are still embryos +

… 15 More Contributing Authors & Articles

Read Genesis Nanotech Online Here

#greatthingsfromsmallthings

Discovery: How groups of cells are able to build our tissues and organs while we are still embryos – Understanding ‘how’ may help us treat Cancer more effectively


 


Ever wondered how groups of cells managed to build your tissues and organs while you were just an embryo?

Using state-of-the-art techniques he developed, UC Santa Barbara researcher Otger Campàs and his group have cracked this longstanding mystery, revealing the astonishing inner-workings of how embryos are physically constructed.

Not only does it bring a century-old hypothesis into the modern age, the study and its techniques provide the researchers a foundation to study other questions key to human health, such as how cancers form and spread or how to engineer organs.

“In a nutshell, we discovered a fundamental physical mechanism that cells use to mold embryonic tissues into their functional 3D shapes,” said Campàs, a professor of mechanical engineering in UCSB’s College of Engineering who holds the Duncan & Suzanne Mellichamp Chair in Systems Biology. His group investigates how living systems self organize to build the remarkable structures and shapes found in nature.


Cells coordinate by exchanging biochemical signals, but they also hold to and push on each other to build the body structures we need to live, such as the eyes, lungs and heart. And, as it turns out, sculpting the embryo is not far from glass molding or 3D printing. In their new work, “A fluid-to-solid jamming transition underlies vertebrate body axis elongation,” published in the journal Nature, Campàs and colleagues reveal that cell collectives switch from fluid to solid states in a controlled manner to build the vertebrate embryo, in a way similar to how we mold glass into vases or 3D print our favorite items. Or, if you like, we 3D print ourselves, from the inside.

Most objects begin as fluids. From metallic structures to gelatin desserts, their shape is made by pouring the molten original materials into molds, then cooling them to get the solid objects we use.


As in a Chihuly glass sculpture, made by carefully melting portions of glass to slowly reshape it into life, cells in certain regions of the embryo are more active and ‘melt’ the tissue into a fluid state that can be restructured. Once done, cells ‘cool down’ to settle the tissue shape, Campàs explained.

“The transition from fluid to solid tissue states that we observed is known in physics as ‘jamming’,” Campàs said. “Jamming transitions are a very general phenomenon that happens when particles in disordered systems, such as foams, emulsions or glasses, are forced together or cooled down.”

This discovery was enabled by techniques previously developed by Campàs and his group to measure the forces between cells inside embryos, and also to exert miniscule forces on the cells as they build tissues and organs. Using zebrafish embryos, favored for their optical transparency but developing much like their human counterparts, the researchers placed tiny droplets of a specially engineered ferromagnetic fluid between the cells of the growing tissue.

The spherical droplets deform as the cells around them push and pull, allowing researchers to see the forces that cells apply on each other. And, by making these droplets magnetic, they also could exert tiny stresses on surrounding cells to see how the tissue would respond.
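The droplet readout rests on a Laplace-pressure relation: for a droplet of known interfacial tension, the difference in curvature between its principal axes gives an order-of-magnitude estimate of the anisotropic stress exerted by the surrounding cells. The sketch below uses invented values for the tension and radii, not the study's measurements.

```python
def anisotropic_stress_pa(gamma_n_per_m, r_min_m, r_max_m):
    """Laplace-type estimate of anisotropic stress on a deformed droplet:
    stress ~ gamma * (1/R_min - 1/R_max). An undeformed (spherical)
    droplet, where R_min == R_max, gives zero."""
    return gamma_n_per_m * (1.0 / r_min_m - 1.0 / r_max_m)

# Illustrative values only: 1 mN/m interfacial tension, a droplet
# squeezed so its principal radii are 25 um and 40 um.
stress = anisotropic_stress_pa(1e-3, 25e-6, 40e-6)
print(f"~{stress:.0f} Pa")  # -> ~15 Pa
```

The more the cells squeeze the droplet out of round, the larger the curvature difference and hence the larger the inferred stress.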

“We were able to measure physical quantities that couldn’t be measured before, due to the challenge of inserting miniaturized probes in tiny developing embryos,” said postdoctoral fellow Alessandro Mongera, who is the lead author of the paper.

“Zebrafish, like other vertebrates, start off from a largely shapeless bunch of cells and need to transform the body into an elongated shape, with the head at one end and tail at the other,” Campàs said.


The physical reorganization of the cells behind this process had always been something of a mystery. Surprisingly, researchers found that the cell collectives making the tissue were physically like a foam (yes, as in beer froth) that jammed during development to ‘freeze’ the tissue architecture and set its shape.

These observations confirm a remarkable intuition made by Victorian-era Scottish mathematician D’Arcy Thompson 100 years ago in his seminal work “On Growth and Form.”

Read about: D’Arcy Wentworth Thompson

“He was convinced that some of the physical mechanisms that give shapes to inert materials were also at play to shape living organisms. Remarkably, he compared groups of cells to foams and even the shaping of cells and tissues to glassblowing,” Campàs said. A century ago, there were no instruments that could directly test the ideas Thompson proposed, Campàs added, though Thompson’s work continues to be cited to this day.

The new Nature paper also provides a jumping-off point from which the Campàs Group researchers can begin to address other processes of embryonic development and related fields, such as how tumors physically invade surrounding tissues and how to engineer organs with specific 3D shapes.

“One of the hallmarks of cancer is the transition between two different tissue architectures. This transition can in principle be explained as an anomalous switch from a solid-like to a fluid-like tissue state,” Mongera explained. “The present study can help elucidate the mechanisms underlying this switch and highlight some of the potential druggable targets to hinder it.”

Alessandro Mongera, Payam Rowghanian, Hannah J. Gustafson, Elijah Shelton, David A. Kealhofer, Emmet K. Carn, Friedhelm Serwane, Adam A. Lucio, James Giammona & Otger Campàs

Nature (2018)

DOI: 10.1038/s41586-018-0479-2

Australian scientists develop nanotechnology to purify water


Scientists in Australia have developed a ground-breaking new way to strip impurities from waste water, with the research set to have massive applications for a number of industries.


By using a new type of crystalline alloy, researchers at Edith Cowan University (ECU) are able to extract the contaminants and pollutants that often end up in water during industrial processing.

“Mining and textile production produces huge amounts of waste water that is contaminated with heavy metals and dyes,” lead researcher Associate Professor Laichang Zhang from ECU’s School of Engineering technology said in a statement on Friday.

Although it is already possible to treat waste water with iron powder, according to Zhang, the cost is very high.

“Firstly, using iron powder leaves you with a large amount of iron sludge that must be stored and secondly it is expensive to produce and can only be used once,” he explained.

“We can produce enough crystalline alloy to treat one tonne of waste water for just 15 Australian dollars (10.8 US dollars). Additionally, we can reuse the crystalline alloy up to five times while still maintaining its effectiveness.”

Based on his previous work with “metallic glass,” Zhang updated the nanotechnology to make it more effective.

“Whereas metallic glasses have a disordered atomic structure, the crystalline alloy we have developed has a more ordered atomic structure,” he said.

“We produced the crystalline alloy by heating metallic glass in a specific way,” he said. “This modifies the structure, allowing the electrons in the crystalline alloy to move more freely, thereby improving its ability to bind with dye molecules or heavy metals, leaving behind usable water.”

Zhang said he will continue to expand his research with industry partners to further improve the technology.

Forbes on Energy: Two Ways Energy Storage Will Be A True Market Disruptor In The U.S. Power Sector


Post written by

Eric Gimon

Eric Gimon is a Senior Fellow for Energy Innovation, and works on the firm’s America’s Power Plan project.

The term “market disruptor” is seemingly thrown around for every new technology with promise, but it will be quite prescient when it comes to energy storage and U.S. power markets.

New U.S. energy storage projects make solar power competitive against existing coal and new natural gas generation, and could soon displace these power market incumbents. Meanwhile, projects in Australia and Germany show how energy storage can completely reshape power market economics and generate revenue in unexpected ways.

In part one of this series, we discussed the three ways energy storage can tap economic opportunities in U.S. organized power markets. Now in part two of the series, let’s explore how storage will disrupt power markets as more and more capacity comes online.

New projects in Colorado and Nevada embody “market disruption”

True market disruption happens when incumbent technologies can improve their performance or costs only incrementally, and their industries focus on achieving those incremental gains, while an entirely new technology enters the market with capabilities incumbents can’t dream of and exponentially falling costs incumbents can’t approach.

As energy storage continues getting cheaper, it will increasingly out-compete other resources and change the mix of resources that run the grid.  Recent contracts for new solar-plus-storage projects signed by Xcel Energy in Colorado and NV Energy in Nevada will allow solar production to extend past sunset and into the evening peak demand period, making it competitive against existing fossil fuel resources and new natural gas.

In fact, energy storage can increasingly replace inefficient (and often dirty) peaker plants and gas plants maintained for reliability.  This trend isn’t limited to utility-scale power plants – behind the meter (i.e., small-scale or residential) energy storage surged in Q2 2018, installing more capacity than front-of-meter storage for the first time.

U.S. energy storage deployment by quarter, 2013-2018. Source: Wood Mackenzie Power & Renewables

Energy storage’s economic edge will accelerate in the future. Bloomberg New Energy Finance forecasts utility-scale battery system costs will fall from $700 per kilowatt-hour (kWh) in 2016 to less than $300/kWh in 2030, drawing $103 billion in investment and doubling in market size six times by 2030.
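Two quick sanity checks on those forecast figures (the numbers are BNEF's; the arithmetic below is ours): doubling in size six times is a 64-fold increase, and falling from $700/kWh to $300/kWh over 2016-2030 implies roughly a 6% annual cost decline.

```python
def doublings_to_multiple(n):
    """Doubling n times multiplies the starting size by 2**n."""
    return 2 ** n

def annual_decline(start_cost, end_cost, years):
    """Compound annual rate of decline implied by start -> end over `years`."""
    return 1 - (end_cost / start_cost) ** (1 / years)

print(f"{doublings_to_multiple(6)}x market growth")  # -> 64x market growth
print(f"{annual_decline(700, 300, 2030 - 2016):.1%} cost decline per year")
```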

Tesla’s Australian “Big Battery” shows how storage will upend the existing order

Energy storage won’t disrupt power markets only because of its continued cost declines versus the resources it could replace, but also because of its different deployment and dispatch characteristics. It won’t merely replace peaker plants or substation upgrades; it will change how other resources operate and are valued. This will require changes to regulations at all scales of the power grid, as well as to power market rules.

Consider the Hornsdale Power Reserve in South Australia, otherwise known as the “Tesla Big Battery.” This 100 megawatt (MW)/129 megawatt-hour (MWh) project is the largest lithium-ion battery in the world. Through South Australian government grants and payments, it contributes to grid stability and ancillary services (also known as “FCAS”) while allowing the associated Hornsdale Wind Farm owners to arbitrage energy prices. A recent report from the Australian Energy Market Operator shows that in Q1 2018, the average arbitrage (price difference between charging and discharging) for this project was AUS $90.56/MWh.

This exemplifies “value stacking,” where the Hornsdale Power Reserve takes advantage of all three ways storage can earn revenue in organized markets under a hybrid compensation model with its single owner/operator (French company Neoen). Hornsdale is already impacting FCAS prices in Australia, with prices tumbling 57% in Q1 2018 from Q4 2017.
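The headline figures quoted above directly imply the battery's storage duration and its gross arbitrage revenue per cycle (our arithmetic, ignoring round-trip losses):

```python
def duration_hours(energy_mwh, power_mw):
    """How long the battery can discharge at full rated power."""
    return energy_mwh / power_mw

def arbitrage_revenue(energy_mwh, spread_per_mwh):
    """Gross revenue from one full charge/discharge cycle at the
    average charge/discharge price spread (round-trip losses ignored)."""
    return energy_mwh * spread_per_mwh

# Hornsdale: 100 MW / 129 MWh, average Q1 2018 spread AUS $90.56/MWh.
print(f"{duration_hours(129, 100):.2f} h at full power")       # -> 1.29 h
print(f"AUS ${arbitrage_revenue(129, 90.56):,.2f} per full cycle")
```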

AEMO frequency control ancillary services markets, 2016-2018. Source: Australian Energy Market Operator

Value stacking for reliability contracts plus market-based revenues (or “Storage as a Transmission Asset”) is also actively being debated in California’s CAISO market.

Because energy storage provides countless benefits at both the local and regional level, in ever-more overlapping combinations, it will create contentious debates and innumerable headaches for power market regulators in coming years.   In 2014, observers were treated to a family feud, as Luminant (generation utility) and TXU (retail power provider) argued against battery storage being installed by Oncor (poles-and-wires utility) for competitive reasons.  More recently, Luminant has argued against AEP building energy storage to relieve transmission bottlenecks to remote communities in southwest Texas because they are “tantamount to peak-shaving and will result in the distortion of competitive market signals.” In California, policy makers are struggling with how to adjust rate structures so behind-the-meter storage projects can meet the state’s emissions reduction goals tied to the subsidies they receive.

Meanwhile, batteries are being combined with more than transmission, wind, and solar projects.  In Germany, a recently closed coal-fired power station is being used simultaneously as a grid-tied storage facility and “live replacement parts store” for third-generation electric vehicle battery packs by Mercedes-Benz Energy.  German automotive supplier Bosch and utility EnBW have installed a storage battery at EnBW’s coal-fired Heilbronn plant to supply balancing power to the market when demand outstrips supply.

Today, inflexible coal plants often receive this type of “uplift” payment when they are committed by power markets to meet demand or for reliability reasons, but can only offer resources in much bigger chunks than economic dispatch would warrant.  This puts billions of dollars at stake in the eastern U.S., where power market operator PJM is considering dramatic rule changes to pay higher prices to these inflexible plants.  What if, in the future, these plants were required to install or sponsor a certain amount of energy storage capacity in order to set marginal power market prices?

Even today, hybrid combinations of storage and other resources are changing the game in subtle but important ways.  Mark Ahlstrom of the Energy Systems Integration Group recently outlined how FERC’s Order 841 allows all kinds of resources to change the way they interact with wholesale power markets (their “participation model”) in unforeseen and unpredictable ways.  For example, the end-point of a point-to-point high-voltage DC transmission line could use a storage participation model to bid or offer into power markets.  Some demand response resources are already combining with storage today “to harness the better qualities of each resource, and allow customers to tap a broader range of cost-reduction and revenue-generating capabilities.”

A recent projection from The Brattle Group underscores this point, forecasting that Order 841 could make 7 GW/20 GWh of energy storage projects profitable, with up to 50 GW of energy storage projects “participating in grid-level energy, ancillary service, and capacity markets.”

Power market disruption is the only guarantee

Eventually the hybrid storage model may become a universal template for all resources, creating additional revenue through improved flexibility.  For example, a hybrid storage-natural gas plant could provide power reserves during a cold start: even before the gas plant is running, reserve power can come from energy storage while the gas turbine fires up.
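The cold-start example above can be sketched as simple dispatch logic: the battery covers the promised reserve until the gas turbine has ramped up enough to take over. All capacities, ramp rates, and timings below are invented round numbers, not figures from any actual plant:

```python
# Hypothetical sketch of a hybrid gas + storage plant covering a reserve
# obligation during a cold start. Ramp rate, start delay, and capacities
# are invented round numbers.

GAS_RAMP_MW_PER_MIN = 10   # gas turbine output gained per minute after ignition
GAS_START_DELAY_MIN = 15   # minutes before the turbine produces any power
BATTERY_MW = 50            # battery power rating
RESERVE_MW = 50            # reserve the hybrid plant has promised the market

def hybrid_output(minute):
    """MW from each component `minute` minutes after a cold-start call."""
    gas = max(0, (minute - GAS_START_DELAY_MIN) * GAS_RAMP_MW_PER_MIN)
    battery = min(BATTERY_MW, max(0, RESERVE_MW - gas))  # battery fills the gap
    return gas, battery

for t in (0, 10, 16, 20):
    gas, battery = hybrid_output(t)
    print(f"t={t:2d} min: gas {gas:3.0f} MW + battery {battery:2.0f} MW")
```

Under these assumptions the reserve commitment is met from minute zero, which a standalone gas plant with a fifteen-minute start delay could not do.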

If the fixed start times of some resources, constraints that are accepted as facts of life today, could be eliminated by hybridizing with storage, then standard market design might start requiring or incentivizing such upgrades to reduce the mathematical complexity and improve the precision of the algorithms that dispatch power plants and set prices today.

As utility-scale batteries continue their relentless cost declines, it’s hard to imagine exactly what the future might hold, but energy storage is guaranteed to disrupt power markets, meaning this sector warrants close attention from savvy investors.

The reality of quantum computing could be … just three years away


Quantum computing has moved out of the realm of theoretical physics and into the real world, but its potential and promise are still years away.

Onstage at TechCrunch Disrupt SF, a powerhouse in the world of quantum research and a young upstart in the field presented visions for the future of the industry that illustrated both how far the industry has come and how far the technology has to go.

For both Dario Gil, the chief operating officer of IBM Research and the company’s vice president of artificial intelligence and quantum computing, and Chad Rigetti, a former IBM researcher who founded Rigetti Computing and serves as its chief executive, the moment that a quantum computer will be able to perform operations better than a classical computer is only three years away.

“[It’s] generating a solution that is better, faster or cheaper than you can do otherwise,” said Rigetti. “Quantum computing has moved out of a field of research into now an engineering discipline and an engineering enterprise.”

Considering the more than 30 years that IBM has been researching the technology and the millions (or billions) that have been poured into developing it, even seeing an end of the road is a victory for researchers and technologists.

Achieving this goal, for all of the brainpower and research hours that have gone into it, is hardly academic.

The Chinese government is building a $10 billion National Laboratory for Quantum Information in Anhui province, slated to open in 2020. Meanwhile, U.S. public research funding for quantum computing is running at around $200 million per year.

Source: Patin Informatics via Bloomberg News.

One of the reasons why governments, especially, are so interested in the technology is its potential to completely remake the cybersecurity landscape. Some technologists argue that quantum computers will have the potential to crack any type of encryption technology, opening up all of the networks in the world to potential hacking.

According to experts, quantum computers will be able to create breakthroughs in many of the most complicated data-processing problems, leading to the development of new medicines, the building of molecular structures and analysis going far beyond the capabilities of today’s binary computers.

Of course, quantum computing is so much more than security. It will enable new ways of doing things we can’t even imagine because we have never had this much pure compute power. Think about artificial intelligence and machine learning or drug development; any type of operation that is compute-intensive could benefit from the exponential increase in compute power that quantum computing will bring.

Security may be the Holy Grail for governments, but both Rigetti and Gil say that the industrial chemical business will be the first place where the potentially radical transformation of a market will appear.

What is quantum computing anyway?

To understand quantum computing it helps to understand the principles of the physics behind it.

As Gil explained onstage (and on our site), quantum computing depends on the principles of superposition, entanglement and interference.

Superposition is the notion that physicists can observe multiple potential states of a particle. “If you flip a coin it is one of two states,” said Gil, meaning that there’s a single outcome that can be observed. But if someone were to spin a coin, they’d see a number of potential outcomes.

Once you’ve got one particle that’s being observed, you can add another and pair them thanks to a phenomenon called quantum entanglement. “If you have two coins where each one can be in superposition, then measurements can be taken” of the difference between the two.

Finally, there’s interference, where the two particles can be manipulated by an outside force to change them and create different outcomes.

“In classical systems you have these bits of zeros and ones and the logical operations of the ands and the ors and the nots,” said Gil. “The classical computer is able to process the logical operations of bits expressed in zeros and ones.”

“In an algorithm you put the computer in a superpositional state,” Gil continued. “You can take the amplitudes and states and interfere them, and the algorithm is the thing that interferes… I can have many, many states representing different pieces of information and then I can interfere with it to get these data.”
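Gil’s coin analogy can be made concrete with a toy state-vector simulation. The sketch below shows superposition and interference for a single qubit using plain linear algebra; it is not IBM’s or Rigetti’s software, just the underlying math:

```python
import numpy as np

# Toy state-vector simulation of the ideas Gil describes: a qubit put into
# superposition, then interference between its amplitude paths.

ket0 = np.array([1.0, 0.0])                   # a classical bit "0"
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

superposed = H @ ket0        # equal amplitudes: the "spinning coin"
probs = superposed ** 2      # measurement probabilities, ~[0.5, 0.5]
print(probs)

# Applying H again makes the two amplitude paths interfere: the paths to
# |1> cancel (destructive) while the paths to |0> reinforce (constructive).
interfered = H @ superposed
print(interfered ** 2)       # ~[1.0, 0.0], back to a definite "0"
```

This is the feature an algorithm exploits: amplitudes, unlike classical probabilities, can cancel each other out, so a well-designed sequence of gates steers probability toward the answer.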

These operations are incredibly hard to sustain. In the early days of research into quantum computing, the superconducting devices had only one nanosecond before a qubit decayed into a traditional bit of data. Those times have since increased to between 50 and 100 microseconds, which enabled IBM and Rigetti to open up their platforms to researchers and others to conduct experimentation (more on that later).
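A back-of-envelope calculation shows why that improvement matters: the coherence window sets how many sequential gate operations fit before the qubit decays. The ~20 ns gate time below is an assumed round number, not a figure from the article:

```python
# Back-of-envelope: how many gate operations fit inside a qubit's coherence
# window. The 20 ns gate time is an assumed round number for illustration.

GATE_TIME_NS = 20

def max_sequential_gates(coherence_us):
    """Rough count of gates that fit in a coherence window of `coherence_us` µs."""
    return coherence_us * 1_000 / GATE_TIME_NS

for coherence_us in (0.001, 50, 100):   # 1 ns (early days) up to 100 µs (today)
    print(f"{coherence_us:>7} µs coherence -> ~{max_sequential_gates(coherence_us):,.0f} gates")
```

Under this assumption, the jump from one nanosecond to 100 microseconds is the difference between fitting no gates at all and fitting thousands, which is what made public experimentation platforms feasible.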

The physical quantum computer

As one can imagine, dealing with quantum particles is a delicate business. So the computing operations have to be carefully controlled. At the base of the machine is what basically amounts to a huge freezer that maintains a temperature of 15 millikelvin in the device, near absolute zero and 180 times colder than the temperatures in interstellar space.
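The temperature comparison checks out arithmetically, taking deep space to sit at the roughly 2.7 K of the cosmic microwave background:

```python
# Checking the article's temperature comparison: a 15 mK dilution
# refrigerator versus the ~2.7 K cosmic microwave background that sets
# the temperature of interstellar space.

fridge_K = 0.015       # 15 millikelvin
deep_space_K = 2.7     # cosmic microwave background

ratio = deep_space_K / fridge_K
print(f"The refrigerator is ~{ratio:.0f}x colder than interstellar space")
```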

“These qubits are very delicate,” said Gil. “Anything from the outside world can couple to it and destroy its state and one way to protect it is to cool it.”

Wiring for the quantum computer is made of superconducting coaxial cables. The inputs to the computer are microwave pulses that manipulate the particles, creating a signal that is then interpreted by the computer’s operators.

Those operators used to require a degree in quantum physics. But both IBM and Rigetti have been working on developing tools that can enable a relative newbie to use the tech.

Quantum computing in the “cloud”

Even as companies like IBM and Rigetti bring the cost of quantum computing down from tens of millions of dollars to roughly $1 million to $2 million, these tools likely will never become commodity hardware that a consumer buys to use as a personal computer.

Rather, as with most other computing these days, quantum computing power will be provided as a service to users.

Indeed, Rigetti announced onstage a new hybrid computing platform that can provide computing services to help the industry reach quantum advantage (the tipping point at which quantum is commercially viable) and to let industries explore the technology and acclimatize to the ways typical operations could be disrupted by it.

“A user logs on to their own device and uses our software development kit to write a quantum application,” said Rigetti. “That program is sent to a compiler and kicks off an optimization kit that runs on a quantum and classical computer… This is the architecture that’s needed to achieve quantum advantage.”
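The loop Rigetti describes (write a program, compile it, optimize across quantum and classical machines) can be sketched as a toy variational algorithm. Here the “quantum” evaluation is just simulated math, and the code is a generic hybrid sketch, not Rigetti’s Quantum Cloud Services API:

```python
import math

# Toy version of a hybrid quantum-classical loop: a classical optimizer
# repeatedly adjusts a circuit parameter, while a "quantum" step (here
# simulated classically) evaluates the result. A generic variational
# sketch, not Rigetti's actual platform.

def quantum_expectation(theta):
    """Simulated <Z> for a qubit rotated by `theta` (stand-in for a QPU call)."""
    return math.cos(theta)

def classical_optimizer(theta, steps=200, lr=0.1):
    """Gradient descent on the expectation value (the classical half)."""
    for _ in range(steps):
        grad = -math.sin(theta)   # derivative of cos(theta)
        theta -= lr * grad
    return theta

theta = classical_optimizer(theta=0.5)
print(f"optimal theta = {theta:.3f}, energy = {quantum_expectation(theta):.3f}")
```

The division of labor is the point: the quantum device only evaluates circuits, while the classical machine decides which circuit to try next, which is why the near-term architecture is hybrid rather than purely quantum.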

Both IBM and Rigetti — and a slew of other competitors — are preparing users for accessing quantum computing opportunities on the cloud.

IBM has more than a million chips performing millions of quantum operations requested by users in over 100 countries around the world.

“In a cloud-first era I’m not sure the economic forces will be there that will drive us to develop the miniaturized environment in the laptop,” Rigetti said. But the ramifications of the technology’s commercialization will be felt by everyone, everywhere.

“Quantum computing is going to change the world and it’s all going to come in our lifetime, whether that’s two years or five years,” he said. “Quantum computing is going to redefine every industry and touch every market. Every major company will be involved in some capacity in that space.”