AI and Nanotechnology Team Up to Bring Humans to the Brink of Immortality, Top Scientist Claims



HUMAN beings becoming immortal is a step closer following the launch of a new start-up.

Dr Ian Pearson has previously said people will have the ability to “not die” by 2050 – just over 30 years from now.

Two of the methods he said humans might use were “body part renewal” and linking bodies with machines so that people are living their lives through an android.

But after Dr Pearson’s predictions, immortality may now be a step nearer following the launch of a new start-up.

The start-up, called Human, hopes to make the immortality dream a reality with an ambitious plan.

Josh Bocanegra, the company’s CEO, said he hopes to use artificial intelligence technology to create its own human being in the next three decades.

He said: “We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioural patterns, thought processes and information about how your body functions from the inside-out.


“This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human.

“Using cloning technology, we will restore the brain as it matures.” 

Last year, UK-based stem cell bank StemProtect said it could eventually develop treatments that allow humans to live to 200.

Mark Hall, from StemProtect, said at the time: “In just the same way as we might replace a joint such as a hip with a specially made synthetic device, we can now replace cells in the body with new cells which are healthy and younger versions of the ones they’re replacing.

“That means we can replace diseased or ageing cells – and parts of the body – with entirely new ones which are completely natural and healthy.”



How a ‘solar battery’ could bring electricity to rural areas – a ‘solar flow’ battery could harvest energy in the daytime and provide electricity in the evening


New solar flow battery with a 14.1 percent efficiency. Photo: David Tenenbaum, UW-Madison

Solar energy is becoming more and more popular as prices drop, yet a home powered by the Sun isn’t free from the grid because solar panels don’t store energy for later. Now, researchers have refined a device that can both harvest and store solar energy, and they hope it will one day bring electricity to rural and underdeveloped areas.

The problem of energy storage has led to many creative solutions, like giant batteries. For a paper published today in the journal Chem, scientists trying to improve the solar cells themselves developed an integrated battery that works in three different ways.

It can work like a normal solar cell by converting sunlight to electricity immediately, explains study author Song Jin, a chemist at the University of Wisconsin at Madison. It can store the solar energy, or it can simply be charged like a normal battery.


It’s a combination of two existing technologies: solar cells that harvest light, and a so-called flow battery.

The most commonly used batteries, lithium-ion, store energy in solid materials, like various metals. Flow batteries, on the other hand, store energy in external liquid tanks.

What is a ‘flow battery’?

This means they are very easy to scale for large projects. Scaling up all the components of a lithium-ion battery might throw off the engineering, but for flow batteries, “you just make the tank bigger,” says Timothy Cook, a University at Buffalo chemist and flow battery expert not involved in the study.

“You really simplify how to make the battery grow in capacity,” he adds. “We’re not making flow batteries to power a cell phone; we’re thinking about buildings or industrial sites.”
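Cook’s “just make the tank bigger” point can be sketched numerically: in a flow battery, energy capacity scales with electrolyte volume while power is set separately by the cell stack. The energy-density figure below is an illustrative assumption, not a value from the study.

```python
# Flow batteries decouple energy capacity (tank volume) from power (cell
# stack). Doubling the tank doubles storable energy with no stack redesign.
ELECTROLYTE_KWH_PER_LITRE = 0.025  # hypothetical electrolyte energy density

def capacity_kwh(tank_litres: float) -> float:
    """Storable energy for a given electrolyte tank volume."""
    return tank_litres * ELECTROLYTE_KWH_PER_LITRE

print(capacity_kwh(1_000))   # 25.0  -> a household-scale tank
print(capacity_kwh(10_000))  # 250.0 -> "you just make the tank bigger"
```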

Jin and his team were the first to combine the two features. They have been working on the battery for years, and have now reached 14.1 percent efficiency.

Jin calls this “round-trip efficiency” — as in, the efficiency from taking that energy, storing it, and discharging it. “We can probably get to 20 percent efficiency in the next few years, and I think 25 percent round-trip is not out of the question,” Jin says.
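The “round-trip” figure can be read as a product of stage efficiencies, since losses compound at each step from harvest through storage to discharge. The stage breakdown below is hypothetical, chosen only to reproduce the combined 14.1 percent number the paper reports.

```python
def round_trip_efficiency(*stage_efficiencies: float) -> float:
    """Overall fraction of captured solar energy delivered after storage.

    Losses compound multiplicatively across harvest, storage and discharge.
    """
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Hypothetical stage breakdown chosen to reproduce the reported figure:
overall = round_trip_efficiency(0.20, 0.90, 0.785)
print(f"{overall:.1%}")  # 14.1%
```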

Apart from improving efficiency, Jin and his team want to develop a better design that can use cheaper materials.

The invention is still at proof-of-concept stage, but he thinks it could have a large impact in less-developed areas without power grids and proper infrastructure. “There, you could have a medium-scale device like this operate by itself,” he says. “It could harvest in the daytime, provide electricity in the evening.” In many areas, Jin adds, having electricity is a game changer, because it can help people be more connected or enable more clinics to be open and therefore improve health care.

And Cook notes that if the solar flow battery can be scaled, it can still be helpful in the US.

The United States might have plenty of power infrastructure, but with such a device, “you can disconnect and have personalized energy where you’re storing and using what you need locally,” he says. And that could help us be less dependent on forms of energy that harm the environment.

Read Genesis Nanotech News Online: Our Latest Edition


Genesis Nanotech News Online: Our Latest Edition with Articles Like –

Australian researchers design a rapid nano-filter that cleans dirty water 100X faster than current technology

Zombie Brain Cells Found in Mice

Energy Storage Technologies vie for Investment and Market Share

… AND …

Breakthrough Discovery: How groups of cells are able to build our tissues and organs while we are still embryos +

… 15 More Contributing Authors & Articles

Read Genesis Nanotech Online Here

#greatthingsfromsmallthings

Discovery: How groups of cells are able to build our tissues and organs while we are still embryos – Understanding ‘how’ may help us treat cancer more effectively


 


Ever wondered how groups of cells managed to build your tissues and organs while you were just an embryo?

Using state-of-the-art techniques he developed, UC Santa Barbara researcher Otger Campàs and his group have cracked this longstanding mystery, revealing the astonishing inner-workings of how embryos are physically constructed.

Not only does it bring a century-old hypothesis into the modern age, the study and its techniques provide the researchers a foundation to study other questions key to human health, such as how cancers form and spread or how to engineer organs.

“In a nutshell, we discovered a fundamental physical mechanism that cells use to mold embryonic tissues into their functional 3D shapes,” said Campàs, a professor of mechanical engineering in UCSB’s College of Engineering who holds the Duncan & Suzanne Mellichamp Chair in Systems Biology. His group investigates how living systems self organize to build the remarkable structures and shapes found in nature.


Cells coordinate by exchanging biochemical signals, but they also pull on and push against each other to build the body structures we need to live, such as the eyes, lungs and heart. And, as it turns out, sculpting the embryo is not far from glass molding or 3D printing. In their new work, “A fluid-to-solid jamming transition underlies vertebrate body axis elongation,” published in the journal Nature, Campàs and colleagues reveal that cell collectives switch from fluid to solid states in a controlled manner to build the vertebrate embryo, in a way similar to how we mold glass into vases or 3D print our favorite items. Or, if you like, we 3D print ourselves, from the inside.

Most objects begin as fluids. From metallic structures to gelatin desserts, their shape is made by pouring the molten original materials into molds, then cooling them to get the solid objects we use.



As in a Chihuly glass sculpture, made by carefully melting portions of glass to slowly reshape it into life, cells in certain regions of the embryo are more active and ‘melt’ the tissue into a fluid state that can be restructured. Once done, cells ‘cool down’ to settle the tissue shape, Campàs explained.

“The transition from fluid to solid tissue states that we observed is known in physics as ‘jamming’,” Campàs said. “Jamming transitions are a very general phenomena that happens when particles in disordered systems, such as foams, emulsions or glasses, are forced together or cooled down.”

This discovery was enabled by techniques previously developed by Campàs and his group to measure the forces between cells inside embryos, and also to exert minuscule forces on the cells as they build tissues and organs. Using zebrafish embryos, favored for their optical transparency but developing much like their human counterparts, the researchers placed tiny droplets of a specially engineered ferromagnetic fluid between the cells of the growing tissue.

The spherical droplets deform as the cells around them push and pull, allowing researchers to see the forces that cells apply on each other. And, by making these droplets magnetic, they also could exert tiny stresses on surrounding cells to see how the tissue would respond.

“We were able to measure physical quantities that couldn’t be measured before, due to the challenge of inserting miniaturized probes in tiny developing embryos,” said postdoctoral fellow Alessandro Mongera, who is the lead author of the paper.

“Zebrafish, like other vertebrates, start off from a largely shapeless bunch of cells and need to transform the body into an elongated shape, with the head at one end and tail at the other,” Campàs said.


The physical reorganization of the cells behind this process had always been something of a mystery. Surprisingly, researchers found that the cell collectives making the tissue were physically like a foam (yes, as in beer froth) that jammed during development to ‘freeze’ the tissue architecture and set its shape.

These observations confirm a remarkable intuition made by Victorian-era Scottish mathematician D’Arcy Thompson 100 years ago in his seminal work “On Growth and Form.”

Read about: D’Arcy Wentworth Thompson

“He was convinced that some of the physical mechanisms that give shapes to inert materials were also at play to shape living organisms. Remarkably, he compared groups of cells to foams and even the shaping of cells and tissues to glassblowing,” Campàs said. A century ago, there were no instruments that could directly test the ideas Thompson proposed, Campàs added, though Thompson’s work continues to be cited to this day.

The new Nature paper also provides a jumping-off point from which the Campàs Group researchers can begin to address other processes of embryonic development and related fields, such as how tumors physically invade surrounding tissues and how to engineer organs with specific 3D shapes.

“One of the hallmarks of cancer is the transition between two different tissue architectures. This transition can in principle be explained as an anomalous switch from a solid-like to a fluid-like tissue state,” Mongera explained. “The present study can help elucidate the mechanisms underlying this switch and highlight some of the potential druggable targets to hinder it.”

Alessandro Mongera, Payam Rowghanian, Hannah J. Gustafson, Elijah Shelton, David A. Kealhofer, Emmet K. Carn, Friedhelm Serwane, Adam A. Lucio, James Giammona & Otger Campàs

Nature (2018)

DOI: 10.1038/s41586-018-0479-2

Australian scientists develop nanotechnology to purify water


Scientists in Australia have developed a ground-breaking new way to strip impurities from waste water, with the research set to have massive applications for a number of industries.


By using a new type of crystalline alloy, researchers at Edith Cowan University (ECU) are able to extract the contaminants and pollutants that often end up in water during industrial processing.

“Mining and textile production produces huge amounts of waste water that is contaminated with heavy metals and dyes,” lead researcher Associate Professor Laichang Zhang from ECU’s School of Engineering said in a statement on Friday.

Although it is already possible to treat waste water with iron powder, according to Zhang, the cost is very high.

“Firstly, using iron powder leaves you with a large amount of iron sludge that must be stored and secondly it is expensive to produce and can only be used once,” he explained.

“We can produce enough crystalline alloy to treat one tonne of waste water for just 15 Australian dollars (10.8 US dollars). Additionally, we can reuse the crystalline alloy up to five times while still maintaining its effectiveness.”

Based on his previous work with “metallic glass,” Zhang updated the nanotechnology to make it more effective.

“Whereas metallic glasses have a disordered atomic structure, the crystalline alloy we have developed has a more ordered atomic structure,” he said.

“We produced the crystalline alloy by heating metallic glass in a specific way. This modifies the structure, allowing the electrons in the crystalline alloy to move more freely, thereby improving its ability to bind with dye molecules or heavy metals, leaving behind usable water.”

Zhang said he will continue to expand his research with industry partners to further improve the technology.

Forbes on Energy: Two Ways Energy Storage Will Be A True Market Disruptor In The U.S. Power Sector


Post written by

Eric Gimon

Eric Gimon is a Senior Fellow for Energy Innovation, and works on the firm’s America’s Power Plan project.

The term “market disruptor” is seemingly thrown around for every new technology with promise, but it will prove apt when it comes to energy storage and U.S. power markets.

New U.S. energy storage projects make solar power competitive against existing coal and new natural gas generation, and could soon displace these power market incumbents. Meanwhile, projects in Australia and Germany show how energy storage can completely reshape power market economics and generate revenue in unexpected ways.

In part one of this series, we discussed the three ways energy storage can tap economic opportunities in U.S. organized power markets. Now in part two of the series, let’s explore how storage will disrupt power markets as more and more capacity comes online.

New projects in Colorado and Nevada embody “market disruption”

True market disruption happens when incumbent technologies can only improve their performance or costs incrementally, and their industries focus on achieving those incremental improvements, while an entirely new technology enters the market with capabilities incumbents can’t dream of and exponentially falling costs incumbents can’t approach.

As energy storage continues getting cheaper, it will increasingly out-compete other resources and change the mix of resources that run the grid.  Recent contracts for new solar-plus-storage projects signed by Xcel Energy in Colorado and NV Energy in Nevada will allow solar production to extend past sunset and into the evening peak demand period, making it competitive against existing fossil fuel resources and new natural gas.

In fact, energy storage can increasingly replace inefficient (and often dirty) peaker plants and gas plants maintained for reliability.  This trend isn’t limited to utility-scale power plants – behind the meter (i.e., small-scale or residential) energy storage surged in Q2 2018, installing more capacity than front-of-meter storage for the first time.

U.S. energy storage deployment by quarter, 2013–2018. Source: Wood Mackenzie Power & Renewables

Energy storage’s economic edge will accelerate in the future. Bloomberg New Energy Finance forecasts utility-scale battery system costs will fall from $700 per kilowatt-hour (KWh) in 2016 to less than $300/KWh in 2030, drawing $103 billion in investment, and doubling in market size six times by 2030.
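The BNEF endpoints imply a steady cost-decline rate, and “doubling six times” is a 64-fold expansion. The constant-rate interpolation below is our own reading of those figures, not BNEF’s methodology.

```python
# Implied constant annual decline between BNEF's two cost points.
start_cost, end_cost = 700, 300        # $/kWh in 2016 and 2030
years = 2030 - 2016
annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
print(f"~{annual_decline:.1%} per year")   # ~5.9% per year

# "Doubling in market size six times" means a 2**6 = 64x larger market.
print(2 ** 6)  # 64
```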

Tesla’s Australian “Big Battery” shows how storage will upend the existing order

But energy storage won’t disrupt power markets simply because of its continued cost declines versus the resources it could replace; its different deployment and dispatch characteristics matter just as much. It won’t merely replace peaker plants or substation upgrades; it will modify how other resources operate and are valued. This will require changes in regulations at all scales of the power grid, as well as in power market rules.

Consider the Hornsdale Power Reserve in South Australia, otherwise known as the “Tesla Big Battery.”  This 100 megawatt (MW)/129 megawatt-hour (MWh) project is the largest lithium-ion battery in the world.  Through South Australian government grants and payments, it contributes to grid stability and ancillary services (also known as “FCAS”) while allowing the associated Hornsdale Wind Farm owners to arbitrage energy prices.  A recent report from the Australian Energy Market Operator shows that in Q1 2018, the average arbitrage (price difference between charging and discharging) for this project was AUS $90.56/MWh.
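To put the AEMO figure in perspective, revenue from a single full charge/discharge cycle can be estimated from the battery’s energy capacity. The one-cycle-at-the-average-spread assumption below is ours for illustration, not AEMO’s.

```python
capacity_mwh = 129        # Hornsdale Power Reserve energy capacity (MWh)
avg_spread = 90.56        # AUS$/MWh average arbitrage, AEMO Q1 2018

# Hypothetical: one full charge/discharge cycle captured at the average spread.
revenue_per_cycle = capacity_mwh * avg_spread
print(f"AUS ${revenue_per_cycle:,.2f} per full cycle")  # AUS $11,682.24
```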

This exemplifies “value stacking”, where the Hornsdale Power Reserve takes advantage of all three ways storage can earn revenue in organized markets under a hybrid compensation model with its single owner/operator (French company Neoen). Hornsdale is already impacting FCAS prices in Australia, with prices tumbling 57% in Q1 2018 from Q4 2017.

AEMO frequency control ancillary services markets, 2016–2018. Source: Australian Energy Market Operator

Value stacking for reliability contracts plus market-based revenues (or “Storage as a Transmission Asset”) is also actively being debated by California’s CAISO market.

Because energy storage provides countless benefits at both the local and regional level, in ever-more overlapping combinations, it will create contentious debates and innumerable headaches for power market regulators in coming years.   In 2014, observers were treated to a family feud, as Luminant (generation utility) and TXU (retail power provider) argued against battery storage being installed by Oncor (poles-and-wires utility) for competitive reasons.  More recently, Luminant has argued against AEP building energy storage to relieve transmission bottlenecks to remote communities in southwest Texas because they are “tantamount to peak-shaving and will result in the distortion of competitive market signals.” In California, policy makers are struggling with how to adjust rate structures so behind-the-meter storage projects can meet the state’s emissions reduction goals tied to the subsidies they receive.

Meanwhile, batteries are being combined with more than transmission, wind, and solar projects. In Germany, a recently closed coal-fired power station is being used simultaneously as a grid-tied storage facility and a “live replacement parts store” for third-generation electric vehicle battery packs by Mercedes-Benz Energy. German automotive supplier Bosch and utility EnBW have installed a storage battery at EnBW’s coal-fired Heilbronn plant to supply balancing power to the market when demand outstrips supply.

Today, inflexible coal plants often receive these types of “uplift” payments when they are committed by power markets to meet demand or for reliability reasons, but they can only offer resources in much bigger chunks than economic dispatch would warrant. This puts billions of dollars at stake in the eastern U.S., where power market operator PJM is considering dramatic rule changes to pay higher prices to these inflexible plants. What if, in the future, these plants were required to install or sponsor a certain amount of energy storage capacity in order to set marginal power market prices?

Even today, hybrid combinations of storage and other resources are changing the game in subtle but important ways. Mark Ahlstrom of the Energy Systems Integration Group recently outlined how FERC’s Order 841 allows all kinds of resources to change the way they interact with wholesale power markets (their participation model) in unforeseen and unpredictable ways. For example, the end-point of a point-to-point high-voltage DC transmission line could use a storage participation model to bid or offer into power markets. Some demand response resources are already combining with storage today “to harness the better qualities of each resource, and allow customers to tap a broader range of cost-reduction and revenue-generating capabilities.”

A recent projection from The Brattle Group underscores this point, forecasting that Order 841 could make energy storage projects profitable from 7 GW/20 GWh, with up to 50 GW of energy storage projects “participating in grid-level energy, ancillary service, and capacity markets.”

Power market disruption is the only guarantee

Eventually the hybrid storage model may become a universal template for all resources, creating additional revenue through improved flexibility. For example, a hybrid storage-natural gas plant could provide power reserves during a cold start: even while the gas plant is not yet running, reserve power can come from energy storage as the gas turbine fires up.

If fixed start times for some resources, which are constraints that are accepted facts of life today, could be eliminated by hybridizing with storage, then standard market design might start requiring or incentivizing such upgrades to reduce the mathematical complexity and improve the precision of the algorithms that dispatch power plants and set prices today.

As utility-scale batteries continue their relentless cost declines, it’s hard to imagine exactly what the future might hold, but energy storage is guaranteed to disrupt power markets – meaning this sector warrants close attention from savvy investors.

The reality of quantum computing could be … just three years away


Quantum computing has moved out of the realm of theoretical physics and into the real world, but its potential and promise are still years away.

Onstage at TechCrunch Disrupt SF, a powerhouse in the world of quantum research and a young upstart in the field presented visions for the future of the industry that illustrated both how far the industry has come and how far the technology has to go.

For both Dario Gil, the chief operating officer of IBM Research and the company’s vice president of artificial intelligence and quantum computing, and Chad Rigetti, a former IBM researcher who founded Rigetti Computing and serves as its chief executive, the moment that a quantum computer will be able to perform operations better than a classical computer is only three years away.

“[It’s] generating a solution that is better, faster or cheaper than you can do otherwise,” said Rigetti. “Quantum computing has moved out of a field of research into now an engineering discipline and an engineering enterprise.”

Considering the more than 30 years that IBM has been researching the technology and the millions (or billions) that have been poured into developing it, even seeing an end of the road is a victory for researchers and technologists.

Achieving this goal, for all of the brainpower and research hours that have gone into it, is hardly academic.

The Chinese government is building a $10 billion National Laboratory for Quantum Information in Anhui province, near Shanghai, slated to open in 2020. Meanwhile, U.S. public research into quantum computing runs at around $200 million per year.

Source: Patin Informatics via Bloomberg News.

One of the reasons why governments, especially, are so interested in the technology is its potential to completely remake the cybersecurity landscape. Some technologists argue that quantum computers will have the potential to crack any type of encryption technology, opening up all of the networks in the world to potential hacking.


According to experts, quantum computers will be able to create breakthroughs in many of the most complicated data processing problems, leading to the development of new medicines, building molecular structures and doing analysis going far beyond the capabilities of today’s binary computers.

Of course, quantum computing is about much more than security. It will enable new ways of doing things we can’t even imagine, because we have never had this much pure compute power. Think about artificial intelligence and machine learning, or drug development; any compute-intensive operation could benefit from the exponential increase in compute power that quantum computing will bring.

Security may be the Holy Grail for governments, but both Rigetti and Gil say the industrial chemical business is where the potentially radical transformation of a market will appear first.

What is quantum computing anyway?

To understand quantum computing it helps to understand the principles of the physics behind it.

As Gil explained onstage (and on our site), quantum computing depends on the principles of superposition, entanglement and interference.


Superposition is the notion that physicists can observe multiple potential states of a particle. “If you flip a coin it is one of two states,” said Gil. Meaning that there’s a single outcome that can be observed. But if someone were to spin a coin, they’d see a number of potential outcomes.

Once you’ve got one particle that’s being observed, you can add another and pair them thanks to a phenomenon called quantum entanglement. “If you have two coins where each one can be in superposition, then measurements can be taken” of the difference of both.

Finally, there’s interference, where the two particles can be manipulated by an outside force to change them and create different outcomes.

“In classical systems you have these bits of zeros and ones and the logical operations of the ands and the ors and the nots,” said Gil. “The classical computer is able to process the logical operations of bits expressed in zeros and ones.”

“In an algorithm you put the computer in a superpositional state,” Gil continued. “You can take the amplitudes and states and interfere them, and the algorithm is the thing that interferes… I can have many, many states representing different pieces of information, and then I can interfere with it to get these data.”
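Gil’s coin analogy can be made concrete with a two-amplitude state vector. This is a generic textbook sketch, not IBM’s stack: a Hadamard gate creates the superposition, and applying it again makes the amplitudes interfere so the qubit returns to a definite state.

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)        # the definite state |0>

# Hadamard gate: turns |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

spinning = H @ zero
print(np.abs(spinning) ** 2)   # [0.5 0.5] -- both outcomes equally likely

# Interference: a second Hadamard makes the |1> amplitudes cancel,
# returning the qubit deterministically to |0>.
settled = H @ spinning
print(np.abs(settled) ** 2)    # [1. 0.]
```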

These operations are incredibly hard to sustain. In the early days of research into quantum computing, superconducting devices held a qubit for only one nanosecond before it decayed into a traditional bit of data. Those times have since increased to between 50 and 100 microseconds, which has enabled IBM and Rigetti to open up their platforms to researchers and others for experimentation (more on that later).

The physical quantum computer

As one can imagine, dealing with quantum particles is a delicate business, so the computing operations have to be carefully controlled. At the base of the machine is what basically amounts to a huge freezer that maintains the device at 15 millikelvin: near absolute zero and 180 times colder than temperatures in interstellar space.
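The “180 times colder” comparison checks out against the cosmic microwave background temperature that pervades interstellar space:

```python
interstellar_k = 2.7    # cosmic microwave background temperature (kelvin)
chip_k = 0.015          # 15 millikelvin inside the dilution refrigerator

print(round(interstellar_k / chip_k))  # 180
```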

“These qubits are very delicate,” said Gil. “Anything from the outside world can couple to it and destroy its state and one way to protect it is to cool it.”

Wiring for the quantum computer is made of superconducting coaxial cables. The inputs to the computers are microwave pulses that manipulate the particles, creating a signal that is then interpreted by the computers’ operators.

Those operators used to require a degree in quantum physics. But both IBM and Rigetti have been working on developing tools that can enable a relative newbie to use the tech.

Quantum computing in the “cloud”

Even as companies like IBM and Rigetti bring the cost of quantum computing down from tens of millions of dollars to roughly $1 million to $2 million, these tools likely will never become commodity hardware that a consumer buys to use as a personal computer.

Rather, as with most other computing these days, quantum computing power will be provided as a service to users.

Indeed, Rigetti announced onstage a new hybrid computing platform that provides computing services to help the industry reach quantum advantage (the tipping point at which quantum computing is commercially viable) and to let industries explore the technology and acclimatize to the ways their typical operations could be disrupted by it.


“A user logs on to their own device and uses our software development kit to write a quantum application,” said Rigetti. “That program is sent to a compiler and kicks off an optimization kit that runs on a quantum and a classical computer… This is the architecture that’s needed to achieve quantum advantage.”
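The quantum-plus-classical architecture Rigetti describes is, in outline, an optimization loop: a classical optimizer repeatedly re-parameterizes a quantum program and reacts to its measured output. The sketch below is ours, with an ordinary function standing in for the quantum execution step.

```python
import numpy as np

def run_quantum_program(theta: float) -> float:
    # Stand-in for compiling and executing a parameterized quantum circuit,
    # then measuring a cost. Real systems return noisy estimates.
    return np.sin(theta) ** 2

theta, learning_rate = 1.0, 0.3
for _ in range(100):                       # the classical half of the loop
    # Finite-difference gradient of the measured cost.
    grad = (run_quantum_program(theta + 1e-4)
            - run_quantum_program(theta - 1e-4)) / 2e-4
    theta -= learning_rate * grad          # update the circuit parameters

print(f"final cost: {run_quantum_program(theta):.6f}")  # final cost: 0.000000
```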

Both IBM and Rigetti — and a slew of other competitors — are preparing users for accessing quantum computing opportunities on the cloud.

IBM has more than a million chips performing millions of quantum operations requested by users in over 100 countries around the world.

“In a cloud-first era I’m not sure the economic forces will be there that will drive us to develop the miniaturized environment in the laptop,” Rigetti said. But the ramifications of the technology’s commercialization will be felt by everyone, everywhere.

“Quantum computing is going to change the world and it’s all going to come in our lifetime, whether that’s two years or five years,” he said. “Quantum computing is going to redefine every industry and touch every market. Every major company will be involved in some capacity in that space.”

Sodium-ion Batteries Could Get Better Thanks to Graphene and Lasers


You hear a lot about the shortcomings of lithium-ion batteries, mostly related to the slow rate of capacity improvements. However, they’re also pretty expensive because of the lithium required for cathodes. Sodium-ion batteries have shown some promise as a vastly cheaper alternative, but the performance hasn’t been comparable. With the aid of lasers and graphene, researchers may have developed a new type of sodium-ion battery that works better and could reduce the cost of battery technology by an order of magnitude.

The research comes from King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Much of the country’s water comes from desalination, so there’s a lot of excess sodium left over. Worldwide, sodium is about 30 times cheaper than lithium, so it would be nice if we could use that as a battery cathode. The issue is that standard graphite anodes don’t hold onto sodium ions as well as they do lithium.

The KAUST team looked at a way to create a material called hard carbon to boost sodium-ion effectiveness. Producing hard carbon usually requires a complex multi-step process that involves heating samples to more than 1,800 degrees Fahrenheit (1,000 Celsius). That effectively eliminates the cost advantage of using sodium in batteries. The KAUST team managed to create something like hard carbon with relative ease using graphene and lasers.

It all starts with a piece of copper foil. The team applied a polymer layer composed of urea-containing polyimide. Researchers blasted this material with a high-intensity laser to create graphene by a process called carbonization. Regular graphene isn’t enough, though. While the laser fired, nitrogen was added to the reaction chamber. Nitrogen atoms end up integrated into the material, replacing some of the carbon atoms. In the end, the material is about 13 percent nitrogen with the remainder carbon.

Making anodes out of this “3D graphene” material offers several advantages. For one, it’s highly conductive. The larger atomic spacing makes it better for capturing sodium ions in a sodium-ion battery, too. Finally, the copper base can be used as a current collector in the battery, saving additional fabrication steps.

The researchers tested a sodium-ion battery with 3D graphene anodes, finding the system outperformed existing sodium-ion systems.

It’s still not as potent as lithium-ion, but these lower cost cells could become popular for applications where high-performance lithium-ion tech isn’t necessary. Your phone will run on lithium batteries for a bit longer.


Watch a YouTube Video for Nano Enabled Batteries from GNT US-Tenka Energy

Quantum dots could aid in fight against Parkinson’s


A large team of researchers with members from several institutions in the U.S., Korea and Japan has found that injecting quantum dots into the bloodstreams of mice led to a reduction in fibrils associated with Parkinson’s disease. In their paper published in the journal Nature Nanotechnology, the group describes their studies of the impact of quantum dots made of graphene on synuclein and what they found.

Quantum dots are nanoscale particles made of semiconducting materials. Because they exhibit quantum properties, scientists have been conducting experiments to learn more about the changes they cause in organisms when embedded in their cells. In this new effort, the researchers became interested in how quantum dots interact with synuclein proteins.

Synucleins are a family of proteins typically found in neural tissue. One member, alpha-synuclein, is associated with the formation of fibrils during the development of Parkinson’s disease. To see how the protein might react when exposed to quantum dots, the researchers combined the two in a petri dish and watched what happened. They found that the quantum dots bound to the protein and, in doing so, prevented it from clumping into fibrils. They also found that adding quantum dots after fibrils had already formed caused the fibrils to come apart. Impressed with these findings, the team pushed their research further.

Noting that quantum dots are small enough to pass through the blood-brain barrier, they injected quantum dots into mice with induced Parkinson’s disease and monitored them for several months. They report that after six months, the mice showed improvements in symptoms.

Read A Related Article

Quantum dots in brain could treat Parkinson’s and Alzheimer’s diseases

The researchers suggest that quantum dots might have a similar impact on multiple ailments where fibrilization occurs, noting that another team had found that injecting them into Alzheimer’s mouse models produced similar results.

It is still not known whether injecting similar or different types of quantum dots into human patients would have the same effect, they note, nor whether doing so would have undesirable side effects. Still, the researchers are optimistic about using quantum dots to treat such diseases, and have begun planning tests in other animals, with an eye toward eventual clinical trials in humans.


A Failed Car Company Gave Rise to a Revolutionary New Battery – “Fisker’s Folly” Or “Henrik’s Home-Run”?


Fisker’s solid-state battery is meant to power electric vehicles, as well as drones and flying taxis.

Since Alessandro Volta created the first true battery in 1800, improvements have been relatively incremental.

When it comes to phones and especially electric vehicles, lithium-ion batteries have resisted a slew of efforts to increase their power and decrease the time it takes to charge them.

Henrik Fisker, known for his high-end sports-car design, says his Los Angeles-based company, Fisker Inc., is on the verge of a breakthrough solid-state battery that will give EVs like his sleek new EMotion an extended range and a relatively short charging period.

Fisker Inc. founder Henrik Fisker and his new EMotion electric vehicle CREDIT: Courtesy Company

“With the size of battery pack we have made room for, we could get as much as a 750-kilometer [466-mile] range,” he says. The same battery could reduce charging time to what it currently takes to fill your car with gas.

Traditional lithium-ion batteries, like all others, use a “wet” chemistry, involving liquid or polymer electrolytes, to generate power.

But they also generate resistance when working hard, such as when they are charging or quickly discharging, which creates heat. When not controlled, that heat can become destructive, which is one reason EVs have to charge slowly.

Solid-state batteries, as the name implies, contain no liquid. Because of this, they have very low resistance, so they don’t overheat, which is one of the keys to fast recharging, says Fisker.

But their limited surface area means they have a low electrode-current density, which limits power. Practically speaking, existing solid-state batteries can’t generate enough juice to push a car. Nor do they work well in low temperatures. And they can’t be manufactured at scale.


Fisker’s head battery scientist, Fabio Albano, solved these problems by essentially turning a one-story solid-state battery into a multistory one.

“What our scientists have created is the three-dimensional solid-state battery, which we also call a bolt battery,” says Fisker. “They’re thicker, and have over 25 times the surface that a thin-film battery has. That has allowed us to create enough power to move a vehicle.”

The upside of 3-D is that Fisker’s solid-state battery can produce 2.5 times the energy density of lithium-ion batteries, at perhaps a third of the cost.
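Taken at face value, that density claim implies a sizable weight saving. The back-of-envelope below is illustrative only: the 250 Wh/kg lithium-ion figure and the 100 kWh pack size are assumed ballparks, not numbers from the article; only the 2.5x multiplier is Fisker's claim.

```python
# Back-of-envelope pack-mass comparison under assumed ballpark figures.
LI_ION_WH_PER_KG = 250.0                          # assumed Li-ion pack density
SOLID_STATE_WH_PER_KG = 2.5 * LI_ION_WH_PER_KG    # Fisker's 2.5x claim -> 625 Wh/kg

pack_kwh = 100.0                                  # hypothetical EV pack size
li_ion_mass_kg = pack_kwh * 1000 / LI_ION_WH_PER_KG          # 400 kg
solid_state_mass_kg = pack_kwh * 1000 / SOLID_STATE_WH_PER_KG  # 160 kg
```

Under those assumptions the same pack capacity drops from roughly 400 kg to 160 kg, which is what makes drone and flying-taxi applications plausible.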

Fisker was originally aiming at 2023 production, but its scientists are making such rapid advances that the company is now targeting 2020.

“We’re actually ahead of where we expected to be,” Fisker says. “We have built batteries with better results quicker than we thought.” The company is setting up a pilot plant near its headquarters.

Solid state, however, isn’t problem-free. Lower resistance aids much faster charging, up to a point. “We can create a one-minute charge up to 80 percent,” Fisker says. “It all depends on what we decide the specific performance and chemistry of the battery should be.”
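A one-minute 80 percent charge sounds simple until you translate it into power. The arithmetic below assumes a hypothetical 80 kWh pack (the pack size is an assumption; the one-minute, 80 percent figure is Fisker's claim):

```python
# Rough arithmetic: delivering 80% of an assumed 80 kWh pack in one
# minute requires megawatt-scale charging hardware.
pack_kwh = 80.0
energy_kwh = pack_kwh * 0.8      # 64 kWh delivered
hours = 1.0 / 60.0               # one minute expressed in hours
power_kw = energy_kwh / hours    # ~3840 kW, i.e. ~3.8 MW
```

Sustaining nearly four megawatts is well beyond the few hundred kilowatts of today's fastest public chargers, which may be why Fisker hedges the claim on the battery's final performance and chemistry.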

If a one- or two- or five-minute charge gives a driver 250 miles and handles the daily commute, that can solve the range-anxiety issue that has held back EV sales.

Solid-state-battery technology can go well beyond cars. Think about people having a solid-state battery in their garage that could charge from the grid when demand is low, so they don’t pay for peak energy, and then transfer that energy to their car battery. It could also act as an emergency generator if their power goes down. “This is nonflammable and very light,” says Fisker. “It’s more than twice as light as existing lithium-ion batteries. It goes into drones and electric flying taxis.”

Like many designers, Fisker is a bit of a dreamer. But he’s also a guy with a track record of putting dreams into motion.

Joy ride.

Henrik Fisker’s car company crashed in the Great Recession, but one of the industry’s flashiest designers quickly got in gear again. His latest piece of automotive art: the EMotion.

Fisker has never created an automobile that didn’t evoke a response. He’s one of the best-known designers in the industry, with mobile masterpieces such as the Fisker Karma, the Aston Martin DB9, and the BMW Z8. It’s only appropriate his latest vehicle has been christened the EMotion.

The curvy, carbon-fiber-and-aluminum all-wheel-drive EV, with its too-cool butterfly doors and cat’s-eye headlights, debuted at the Consumer Electronics Show in January. It will be the first passenger-vehicle offering of the new Fisker Inc.; the previous Fisker Automotive shut down in 2013, in the aftermath of the Great Recession. (Reborn as Karma Automotive, that company makes the Revero, based on a Fisker design.)

Fisker ran out of funding but not ideas. He quickly got the new company going and has described the EMotion as having “edgy, dramatic, and emotionally charged design/proportions, complemented with technological innovation that moves us into the future.” The car will come equipped with a Level 4 autonomous driving system, meaning it is one step away from being completely autonomous.

You might want to drive this one yourself, though. The EMotion sports a 575-kW (780-hp-equivalent) power plant that delivers a 160-mph top speed and goes from 0 to 60 in three seconds. The sticker price is $129,000; the company is currently taking refundable $2,000 deposits.

Though designed to hold the new solid-state battery, the EMotion that will hit the road in mid-2020 has a proprietary battery module from LG Chem that promises a range of 400 miles (the Tesla Model S boasts 335). About his comeback car, Fisker says he felt free to be “radically innovative.” For a niche carmaker, it might be the only way to remain competitive.