A new brain-inspired architecture could improve how computers handle data and advance AI

Brain-inspired computing using phase change memory. Credit: Nature Nanotechnology/IBM Research

IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. Their designs draw on concepts from the human brain and significantly outperform conventional computers in comparative studies. They report on their recent findings in the Journal of Applied Physics.

Today’s computers are built on the von Neumann architecture, developed in the 1940s. Von Neumann computing systems feature a central processor that executes logic and arithmetic, a memory unit, storage, and input and output devices. In contrast to these siloed components of conventional computers, the authors propose brain-inspired computers in which processing and memory units coexist.

Abu Sebastian, an author on the paper, explained that executing certain computational tasks in the computer’s memory would increase the system’s efficiency and save energy.

“If you look at human beings, we compute with 20 to 30 watts of power, whereas AI today is based on supercomputers which run on kilowatts or megawatts of power,” Sebastian said. “In the brain, synapses are both computing and storing information. In a new architecture, going beyond von Neumann, memory has to play a more active role in computing.”

The IBM team drew on three different levels of inspiration from the brain. The first level exploits a memory device’s state dynamics to perform computational tasks in the memory itself, similar to how the brain’s memory and processing are co-located. The second level draws on the brain’s synaptic network structures as inspiration for arrays of phase change memory (PCM) devices to accelerate training for deep neural networks. Lastly, the dynamic and stochastic nature of neurons and synapses inspired the team to create a powerful computational substrate for spiking neural networks.

Phase change memory is a nanoscale memory device built from compounds of Ge, Te and Sb sandwiched between electrodes. These compounds exhibit different electrical properties depending on their atomic arrangement. For example, in a disordered phase, these materials exhibit high resistivity, whereas in a crystalline phase they show low resistivity.

By applying electrical pulses, the researchers modulated the ratio of material in the crystalline and the amorphous phases so the phase change memory devices could support a continuum of electrical resistance or conductance. This analog storage better resembles nonbinary, biological synapses and enables more information to be stored in a single nanoscale device.
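The idea of a continuum of conductance states can be sketched with a toy model. The constants and the linear mixing rule below are illustrative assumptions, not IBM device parameters:

```python
# Toy model of analog storage in a phase-change memory (PCM) cell.
# All numbers are illustrative, not measured device values.

G_AMORPHOUS = 1e-6    # siemens: high-resistivity (disordered) phase
G_CRYSTALLINE = 1e-3  # siemens: low-resistivity (crystalline) phase

class PCMCell:
    def __init__(self):
        self.crystal_fraction = 0.0  # fully amorphous at reset

    def apply_set_pulse(self, strength=0.1):
        """A partial-SET pulse crystallises a little more material."""
        self.crystal_fraction = min(1.0, self.crystal_fraction + strength)

    def reset(self):
        """A strong RESET pulse melt-quenches the cell back to amorphous."""
        self.crystal_fraction = 0.0

    @property
    def conductance(self):
        # Linear mixture of the two phases: a continuum of states,
        # which is what lets a single cell store an analog value.
        f = self.crystal_fraction
        return f * G_CRYSTALLINE + (1 - f) * G_AMORPHOUS

cell = PCMCell()
readings = []
for _ in range(5):
    cell.apply_set_pulse(0.2)
    readings.append(cell.conductance)
# conductance climbs smoothly toward the crystalline value
```

Each pulse nudges the cell to an intermediate conductance rather than flipping it between two binary states, which is the property the article compares to a nonbinary biological synapse.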

Sebastian and his IBM colleagues have encountered surprising results in their comparative studies on the efficiency of these proposed systems. “We always expected these systems to be much better than conventional computing systems in some tasks, but we were surprised how much more efficient some of these approaches were.”

Last year, they ran an unsupervised machine learning algorithm on a conventional computer and on a prototype computational memory platform based on phase change memory devices. “We could achieve 200 times faster performance in the phase change memory computing systems as opposed to conventional computing systems,” Sebastian said. “We always knew they would be efficient, but we didn’t expect them to outperform by this much.” The team continues to build prototype chips and systems based on brain-inspired concepts.


More information: Abu Sebastian et al, Tutorial: Brain-inspired computing using phase-change memory devices, Journal of Applied Physics (2018). DOI: 10.1063/1.5042413



“Not Just for Select Few”: Quantum Computing Is Growing toward Commercialization, Disrupting Everything


Consider three hair-pulling problems: 1 percent of the world’s energy is used every year just to produce fertilizer; solar panels aren’t powerful enough to provide all the power for most homes; investing in stocks often feels like a game of Russian roulette.

Those seemingly disparate issues can be solved by the same tool, according to some scientists: quantum computing. Quantum computers use quantum bits, often built from superconducting circuits, to perform tasks and have long been seen as a luxury for the top academic echelon—far removed from the common individual. But that’s quickly changing.

IBM had been dabbling with commercial possibilities when last year it released Quantum Experience, a cloud-based quantum computing service researchers could use to run experiments without having to buy a quantum system. In early March, IBM took that program further and announced IBM Q, the first cloud quantum computing system for commercial use. Companies will be able to buy time on IBM’s quantum computers in New York state, though IBM has not set a release date or price, and it is expected to be financially prohibitive for smaller companies at first.

Jarrod McClean, a computing sciences fellow at Lawrence Berkeley National Laboratory, says the announcement is exciting because quantum computing wasn’t expected to hit commercial markets for decades. Last year, some experts estimated commercial experimentation could be five to 40 years away, yet here we are, and the potential applications could disrupt the way pharmaceutical companies make medicine, the way logistics companies schedule trains and the way hedge fund managers gain an edge in the stock market. “We’re seeing more application areas start to develop all the time, now that people are looking at quantum,” McClean says.

Quantum computing is as different from traditional computing as an abacus is from a MacBook. “Classical computing was [invented] in the 1940s. This is like [that creation], but even beyond it,” says Scott Crowder, IBM Systems vice president and chief technology officer of quantum computing, technical strategy and transformation. “Take everything you know about how a class of computers works and forget it.”

Besting supercomputers

Quantum computers are made up of parts called qubits, also known as quantum bits. On some problems, they leverage the strange physics of quantum mechanics to work faster than chips on a traditional computer. (Just as a plane cannot exactly compare to a race car, a classical computer will still be able to do some things better than quantum, and vice versa. They’re just different.)

Explaining how qubits work requires jumping into quantum mechanics, which doesn’t follow the same rules of physics we’re used to in our everyday lives. Quantum entanglement and quantum superposition are particularly important; they defy common sense but take place only in environments that are incredibly tiny.


IBM Quantum Computing Scientists Hanhee Paik, left, and Sarah Sheldon, right, examine the hardware inside an open dilution fridge at the IBM Q Lab at IBM’s T. J. Watson Research Center in Yorktown Heights, New York. IBM Q quantum systems and services will be delivered via the IBM Cloud platform and will be designed to tackle problems that are too complex and exponential in nature for classical computing systems to handle. One of the first and most promising applications for quantum computing will be in the area of chemistry and could lead to the discovery of new medicines and materials. Credit: Connie Zhou/IBM

Quantum superposition is important because it allows the qubit to do two things at once. Technically, it allows the qubit to be two things at once. While traditional computers put bits in 0 and 1 configurations to calculate steps, a qubit can be a 0 and a 1 at the same time. Quantum entanglement, another purely quantum property, takes the possibilities a step further by intertwining the characteristics of two different qubits, allowing for even more calculations. Calculations that would take longer than a human’s life span to work out on a classical computer can be completed in a matter of days or hours.
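To make the superposition and entanglement point concrete, here is a minimal NumPy sketch (plain linear algebra, not any vendor’s quantum SDK) that prepares two entangled qubits in a Bell state:

```python
import numpy as np

# Two-qubit statevector demo of superposition and entanglement.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: puts a qubit in superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # entangles control and target qubits

state = np.array([1.0, 0.0, 0.0, 0.0])          # start in |00>
state = np.kron(H, I) @ state                   # superpose the first qubit
state = CNOT @ state                            # entangle: (|00> + |11>)/sqrt(2)

probs = state ** 2                              # measurement probabilities
# probs = [0.5, 0, 0, 0.5]: only "00" and "11" are ever observed,
# and the two qubits' outcomes are perfectly correlated
```

Note the state of n qubits needs 2**n amplitudes, which is why simulating even ~50 qubits strains classical machines and why a 50-qubit device is described above as an inflection point.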

Eventually, quantum computing could outperform the world’s fastest supercomputer—and then all computers ever made, combined. We aren’t there yet, but at 50 qubits, universal quantum computing would reach that inflection point and be able to solve problems existing computers can’t handle, says Jerry Chow, a member of IBM’s experimental quantum computing department. He added that IBM plans to build and distribute a 50-qubit system “in the next few years.” Google aims to complete a 49-qubit system by the end of 2017.

Some experts aren’t convinced IBM’s move into the commercial market is significant. Yoshihisa Yamamoto, a Stanford University physics professor, says, “I expect the IBM quantum computer has a long way to go before it is commercialized to change our everyday life.”

Caltech assistant professor of computing and mathematical sciences Thomas Vidick says IBM’s commercialization of quantum computing feels “a bit premature” and estimates it will still be 10 to 20 years before commercial applications are mainstream. “The point is that quantum hardware hasn’t reached maturity yet,” he explains. “These are large machines, but they are hard to control. There is a big overhead in the transformation that maps the problem you want to solve to a problem that the machine can solve, one that fits its architecture.”

Despite the skepticism, many researchers are pumped. “While the current systems aren’t likely to solve a computational problem that regular computers can’t already solve, preparing the software layer in advance will help us hit the ground running when systems large enough to be useful become available,” says Michele Mosca, co-founder of the Institute for Quantum Computing at Ontario’s University of Waterloo. “Everyday life will start to get affected once larger-scale quantum computers are built and they are used to solve important design and optimization problems.”

A company called D-Wave Systems already sells 2,000-qubit systems, but its systems are different from IBM’s and other forms of universal quantum computers, so many experts don’t consider their development to have reached that quantum finish line. D-Wave Systems’s computers are a type of quantum computer called quantum annealers, and they are limited because they can be used only on optimization problems. There is a roaring scientific debate about whether quantum annealers could eventually outpace traditional supercomputers, but regardless, this type of quantum computer is really good at one niche problem and can’t expand beyond that right now.

Practical Applications

What problems could be so complicated they would require a quantum computer? Take fertilizer production, McClean says. The power-hungry process to make mass-produced fertilizer accounts for 1 percent to 2 percent of the world’s energy use per year. But there’s a type of cyanobacteria that uses an enzyme to do nitrogen fixation at room temperature, which means it uses energy far more efficiently than industrial methods. “It’s been too challenging for classical systems to date,” McClean says, but he notes that quantum computers would probably be able to reveal the enzyme’s secrets so researchers could re-create the process synthetically. “It’s such an interesting problem from a point of view of how nature is able to do this particular type of catalysis,” he adds.

Pharmaceutical science could also benefit. One of the limitations to developing better, cheaper drugs is problems that arise when dealing with electronic structures, McClean says. Except with the simplest structures, like hydrogen-based molecules, understanding atomic and subatomic motion requires running computer simulations. But even that breaks down with more complex molecules. “You don’t even ask those questions on a classical computer because you know you’re going to get it wrong,” Crowder says.

The ability to predict how molecules react with other drugs, and the efficacy of certain catalysts in drug development, could drastically speed up the pace of pharmaceutical development and, ideally, lower prices, McClean says.

Finance is also plagued by complicated problems with multiple moving parts, says Marcos López de Prado, a research fellow in the Computational Research Department at Lawrence Berkeley National Laboratory. Creating a dynamic investment portfolio that can adjust to the markets with artificial intelligence, or running simulations with multiple variables, would be ideal, but current computers aren’t advanced enough to make this method possible. “The problem is that a portfolio that is optimal today may not be optimal tomorrow,” López de Prado says, “and the rebalance between the two can be so costly as to defeat its purpose.”

Quantum computing could figure out the optimal way to rebalance portfolios day by day (or minute by minute) since that “will require a computing power beyond the current potential of digital computers,” says López de Prado, who is also a senior managing director at Guggenheim Partners. “Instead of listening to gurus or watching TV shows with Wall Street connections, we could finally get the tools needed to replace guesswork with science.”
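The rebalancing trade-off López de Prado describes can be made concrete with a deliberately tiny sketch. This is an invented toy (made-up numbers, a brute-force classical search, not his methodology or any quantum algorithm), but it shows how turnover costs can make the “optimal” move not worth taking:

```python
import numpy as np

# Toy two-asset mean-variance choice with a transaction-cost penalty.
# All figures are invented for illustration.

def best_weight(expected_returns, cov, risk_aversion, current_w, cost_per_unit):
    """Brute-force search over the weight on asset 0:
    maximize expected return - risk penalty - cost of trading there."""
    candidates = np.linspace(0.0, 1.0, 101)
    best, best_score = current_w, -np.inf
    for w0 in candidates:
        w = np.array([w0, 1 - w0])
        ret = w @ expected_returns
        risk = w @ cov @ w
        cost = cost_per_unit * abs(w0 - current_w)   # turnover penalty
        score = ret - risk_aversion * risk - cost
        if score > best_score:
            best, best_score = w0, score
    return best

mu = np.array([0.08, 0.05])                      # expected returns
cov = np.array([[0.04, 0.01], [0.01, 0.02]])     # covariance of returns

free_move = best_weight(mu, cov, risk_aversion=2.0, current_w=0.0, cost_per_unit=0.0)
costly_move = best_weight(mu, cov, risk_aversion=2.0, current_w=0.0, cost_per_unit=0.5)
# with high trading costs, the portfolio stays where it already is
```

With no trading cost the search moves to a diversified weight; with a steep cost it refuses to move at all, which is exactly the “rebalance can defeat its purpose” problem. Rerunning this over many assets, every day, is where the combinatorics explode.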

While business applications within quantum computing are mostly hopeful theories, there’s one area where experts agree quantum could be valuable: optimization. Using quantum computing to create a program that “thinks” through how to make business operations faster, smarter and cheaper could revolutionize countless industries, López de Prado says.

For example, quantum computers could be used to organize delivery truck routes so holiday gifts arrive faster during the rush before Christmas. They could take thousands of self-driving cars and organize them on the highway so all the drivers get to their destination via the fastest route. They could create automated translating software so international businesses don’t have to bother with delays caused by translating emails. “Optimization is just a generic hammer they can use on all these nails,” McClean says.

One day, quantum might even be used for nationwide problems, like optimizing the entire U.S. economy or organizing a national power grid.

Just as computers presented a huge advantage to the handful of companies that could afford them when they first came on the commercial market, it’s possible that a few companies might gain a tactical advantage by using quantum computing now. For example, if only a few investors use quantum computing to balance portfolios, the rest of the market will probably lose money. “But what happens when quantum computing goes mainstream?” asks López de Prado. “That tactical disadvantage disappears. Instead, everyone will be able to make better investment decisions. People will rely on science rather than stories.”

Link to Original Article in Newsweek Tech & Science

IBM Scientists: Quantum transport goes ballistic ~ ‘Q & A’ with NIST



IBM scientists have shot an electron through a III-V semiconductor nanowire integrated on silicon for the first time (Nano Letters, “Ballistic One-Dimensional InAs Nanowire Cross-Junction Interconnects”). This achievement will form the basis for sophisticated quantum wire devices for future integrated circuits used in advanced powerful computational systems. (A Q&A with NIST)

NIST: The title of your paper is Ballistic one-dimensional InAs nanowire cross-junction interconnects. When I read “ballistic,” rather large missiles come to mind, but here you are doing this at the nanoscale. Can you talk about the challenges this presents?
Johannes Gooth (JG): Yes, this is very similar, but of course at a much different scale. Electrons are fired from one contact electrode and fly through the nanowire without being scattered until they hit the opposing electrode. The nanowire acts as a perfect guide for electrons, such that the full quantum information of this electron (energy, momentum, spin) can be transferred without losses.

IBM scientist Johannes Gooth is focused on nanoscale electronics and quantum physics.


We can now do this in cross-junctions, which allows us to build up electron pipe networks, where quantum information can perfectly be transmitted. The challenge is to fabricate a geometrically very well defined material with no scatterers inside on the nano scale. The template-assisted selective epitaxy or TASE process, which was developed here at the IBM Zurich Lab by my colleagues, makes this possible for the first time.
NIST: How does this research compare to other activities underway elsewhere?
JG: Most importantly, compared to optical and superconducting quantum applications the technique is scalable and compatible with standard electronics and CMOS processes.
NIST: What role do you see for quantum transport as we look to build a universal quantum computer?
JG: I see quantum transport as an essential piece. If you want to exercise the full power of quantum information technology, you need to connect everything ballistic: a quantum system that is fully ballistically (quantum) connected has an exponentially larger computational state space compared to classically connected systems.
Also, as stated above, the electronics are scalable. Moreover, combining our nanowire structures with superconductors allows for topologically protected quantum computing, which enables fault-tolerant computation. These are major advantages compared to other techniques.
NIST: How easily can this be manufactured using existing processes and what’s the next step?
JG: This is a major advantage of our technique because our devices are fully integrated into existing CMOS processes and technology.
NIST: What’s next for your research?
JG: The next steps will be the functionalization of the crosses, by means of attaching electronic quantum computational parts. We will start to build superconducting/nanowire hybrid devices for Majorana braiding, and attach quantum dots.
Source: NIST


IBM creates world’s first artificial phase-change neurons – They behave like biological neurons, including low power usage and dense scaling.



IBM Research in Zurich has created the world’s first artificial nanoscale stochastic phase-change neurons. IBM has already created a population of 500 of these artificial neurons and used them to process a signal in a brain-like (neuromorphic) way.

This breakthrough is particularly notable because the phase-change neurons are fashioned out of well-understood materials that can scale down to a few nanometres, and because they are capable of firing at high speed but with low energy requirements. Also important is the neurons’ stochasticity—that is, their ability to always produce slightly different, random results, like biological neurons.

Enough fluff—let’s talk about how these phase-change neurons are actually constructed. At this point, it might help if you look at the first diagram in the gallery.

A diagram showing a single phase-change neuron. IBM





The dynamics of a phase-change neuron. Bottom left shows the gradual crystallisation of the GST – when there’s enough crystalline (shown in blue), the neuron fires. IBM


On the left you can see a single phase-change neuron’s behaviour initially – in the middle, you can see how it behaves after a few seconds. IBM




Like a biological neuron, IBM’s artificial neuron has inputs (dendrites), a neuronal membrane (lipid bilayer) around the spike generator (soma, nucleus), and an output (axon). There’s also a back-propagation link from the spike generator back to the inputs, to reinforce the strength of some input spikes.



The key difference is in the neuronal membrane. In a real neuron, this would be a lipid bilayer, which essentially acts as both a resistor and a capacitor: it resists conductance, but eventually, with enough electricity along the input dendrite, it builds up enough potential that its own spike of electricity is produced—which then flows along the axons to other neurons—and so on and on.

In IBM’s neuron, the membrane is replaced with a small square of germanium-antimony-tellurium (GeSbTe or GST). GST, which happens to be the main active ingredient in rewritable optical discs, is a phase-change material. This means it can happily exist in two different phases (in this case crystalline and amorphous), and easily switch between the two, usually by applying heat (by way of laser or electricity). A phase-change material has very different physical properties depending on which phase it’s in: in the case of GST, its amorphous phase is an electrical insulator, while the crystalline phase conducts.




With the artificial neurons, the square of GST begins life in its amorphous phase. Then, as spikes arrive from the inputs, the GST slowly begins to crystallise. Eventually, the GST crystallises enough that it becomes conductive—and voilà, electricity flows across the membrane and creates a spike. After an arbitrary refractory period (a resting period where something isn’t responsive to stimuli), the GST is reset back to its amorphous phase and the process begins again.

“Stochastic” refers to a system where there is an amount of randomness in the results. Biological neurons are stochastic due to a range of different noises (ionic conductance, thermal, background). IBM says that its artificial neurons exhibit similar stochastic behaviour because the amorphous state of each GST cell is always slightly different after each reset, which in turn causes the crystallisation process to be different. Thus, the engineers never quite know exactly when each artificial neuron will fire.
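The fire-and-reset loop just described can be sketched in a few lines of Python. The threshold, crystallisation increment, and reset distribution below are invented for illustration, not taken from IBM’s device:

```python
import random

# Sketch of a stochastic phase-change (integrate-and-fire) neuron:
# input spikes gradually crystallise the GST cell, the neuron fires once
# crystallisation crosses a conduction threshold, and an imperfect
# amorphous reset makes the firing times vary from trial to trial.
# All parameters are illustrative.

def run_neuron(n_input_spikes, rng, threshold=1.0, step=0.12):
    """Return the output spike times produced by a stream of input pulses."""
    crystallinity = 0.0
    spikes = []
    for t in range(n_input_spikes):
        crystallinity += step                  # each input pulse crystallises a bit more
        if crystallinity >= threshold:         # GST now conducts: the neuron fires
            spikes.append(t)
            # Melt-quench reset: the new amorphous state is slightly
            # different each time, the source of the stochasticity.
            crystallinity = rng.uniform(0.0, 0.1)
    return spikes

rng = random.Random(42)
trial_1 = run_neuron(100, rng)
trial_2 = run_neuron(100, rng)
# successive trials generally produce slightly different spike trains,
# because each reset leaves the cell in a slightly different state
```

The inter-spike interval jitters with each reset, mirroring the article’s point that the engineers never know exactly when a given neuron will fire.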


A photo of a wafer full of phase-change devices (the silver squares). The probe needles are required to make the actual thing work: this is a prototype chip without the usual traces/pins that would connect it to a circuit board.

Phew, you made it. Now what?




What IBM’s multi-artificial-neuron computer looks like.


There seem to be two main takeaways here.

First, the artificial neurons are made from well-understood materials that have good performance characteristics, last a long time (trillions of switching cycles), and can be fabricated/integrated on leading-edge nodes (the chip pictured above was fabbed at 90nm, but the research paper mentions the possibility of 14nm). The phase-change devices presented in this research are already pretty small—squares that are about 100 nanometres across.
Second, these phase-change neurons are the closest we’ve come to creating artificial devices that behave like biological neurons, perhaps leading us towards efficient, massively parallel computer designs that apply neuromorphic approaches to decision-making and processing sensory information. IBM says that their new work is complementary to research being carried out into memristor-based synapses, too.

So far, IBM has built 10×10 crossbar arrays of neurons, connected five of those arrays together to create neuronal populations of up to 500 neurons, and then processed broadband signals in a novel, brain-like way. (In technical terms, the neurons showed the same “population coding” that emerges in biological neurons, and the signal processing circumvented the Nyquist-Shannon sampling theorem).

There’s no reason to stop there, though. Now it’s time to put thousands of these phase-change neurons onto a single chip—and then the difficult bit: writing some software that actually makes use of the chip’s neuromorphosity.

Nature Nanotechnology, 2016. DOI: 10.1038/nnano.2016.70 (About DOIs).

Silicon Photonics: The next step to a High-Bandwidth Future?

The computing and telecommunications industries have ambitious plans for the future: systems that will store information in the cloud, analyze enormous amounts of data, and think more like a brain than a standard computer.

Such systems are already being developed, and scientists at IBM Research have now demonstrated what may be an important step toward commercializing this next generation of computing technology. They established a method to integrate silicon photonic chips with the processor in the same package, avoiding the need for transceiver assemblies.

CMOS silicon photonics chip.
The new technique, which will be presented 25 March at this year’s OFC Conference and Exposition in Los Angeles, California, USA, should lower the cost and increase the performance, energy efficiency and size of future data centers, supercomputers and cloud systems.
Photonic devices, which use photons instead of electrons to transport and manipulate information, offer many advantages compared to traditional electronic links found in today’s computers. Optical links can transmit more information over larger distances and are more energy efficient than copper-based links. To optimally benefit from this technology, a tight integration of the electrical logic and optical transmission functions is required. The optical chip needs to be as close to the electrical chip as possible to minimize the distance of electrical connection between them. This can only be accomplished if they are packaged together.
“IBM has been a pioneer in the area of CMOS integrated silicon photonics for more than 12 years, a technology that integrates functions for optical communications on a silicon chip,” said Bert Offrein, manager of the photonics group at IBM Research – Zurich. “In addition to the silicon technology advancements at the chip-level, novel system-level integration concepts are also required to fully profit from the new capabilities silicon photonics will bring,” he continued.
Optical interconnect technology is currently incorporated into data centers by attaching discrete transceivers or active optical cables, which come in pre-assembled building blocks. The pre-packaged transceivers are large and expensive, limiting their large-scale use, Offrein said. Furthermore, such transceivers are mounted at the edge of the board, resulting in a large distance between the processor chip and the optical components.
Offrein and his IBM colleagues from Europe, the United States and Japan instead proposed an integration scheme in which the silicon photonic chips are treated similarly to ordinary silicon processor chips and are directly attached to the processor package without pre-assembling them into standard transceiver housings. This improves the performance and power efficiency of the optical interconnects while reducing the cost of assembly. Challenges arise because alignment tolerances in photonics are critical (sub-micron range) and optical interfaces are sensitive to debris and imperfections, thus requiring the best in packaging technology.
IBM scientist Roger Dangel holds a thin film polymer waveguide. IBM is experimenting with waveguides as a way to integrate silicon photonic chips into data center systems.
The team demonstrated efficient optical coupling of an array of silicon waveguides to a substrate containing an array of polymer waveguides. The significant size difference between the silicon waveguides and the polymer waveguides originally presented a major challenge. The researchers overcame this obstacle by gradually tapering the silicon waveguide, leading to an efficient transfer of the optical signal to the polymer waveguide.
The method is scalable and enables the simultaneous interfacing of many optical connections between a silicon photonic chip and the system. The optical coupling is also wavelength and polarization insensitive and tolerant to alignment offsets of a few micrometers, Offrein said.
“This integration scheme has the potential to massively reduce the cost of applying silicon photonics optical interconnects in computing systems,” Offrein said. Cheaper photonic technology enables its deployment at a large scale, which will lead to computing systems that can process more information at higher performance levels and with better energy efficiency, he explained.
“Such systems will be key for future applications in the field of cloud-computing, big data, analytics and cognitive computing. In addition, it will enable novel architectures requiring high communication bandwidth, as for example in disaggregated systems,” Offrein said.
About the presentation
Presentation W3A.4, titled “Silicon Photonics for the Data Center,” will take place Wednesday 25 March at 14:00 PDT in room 402AB at the Los Angeles Convention Center.
Source: IBM

Supercomputer Speed from a Tiny “Chip” that Mimics the Human Brain

IBM’s new neurosynaptic processor integrates 1 million neurons and 256 million synapses on a single chip. Credit: IBM

Researchers Thursday unveiled a powerful new postage-stamp-sized chip delivering supercomputer performance using a process that mimics the human brain.

The so-called “neurosynaptic” chip is a breakthrough that opens a wide new range of computing possibilities, from self-driving cars to artificial intelligence systems that can be installed on a smartphone, the scientists say.

The researchers from IBM, Cornell Tech and collaborators from around the world said they took an entirely new approach in design compared with previous computer architecture, moving toward a system called “cognitive computing.”

“We have taken inspiration from the cerebral cortex to design this chip,” said IBM chief scientist for brain-inspired computing, Dharmendra Modha, referring to the command center of the brain.

He said existing computers trace their lineage back to machines from the 1940s which are essentially “sequential number-crunching calculators” that perform mathematical or “left brain” tasks but little else.

The new chip dubbed “TrueNorth” works to mimic the “right brain” functions of sensory processing—responding to sights, smells and information from the environment to “learn” to respond in different situations, Modha said.

It accomplishes this task by using a huge network of “neurons” and “synapses,” similar to how the human brain functions by using information gathered from the body’s sensory organs.

The researchers designed TrueNorth with one million programmable neurons and 256 million programmable synapses, on a chip with 4,096 cores and 5.4 billion transistors.

A key to the performance is the extremely low energy use of the new chip, which runs on the equivalent energy of a hearing-aid battery. This can allow a chip installed in a car or smartphone to perform supercomputer calculations in real time without connecting to the cloud or other network.

Sensor becomes the computer


Infographic: A brain-inspired chip to transform mobility and Internet of Things through sensory perception. Credit: IBM 

“You could have better sensory processors without the connection to Wi-Fi or the cloud,” Modha said.

This would allow a self-driving vehicle, for example, to detect problems and deal with them even if its data connection is broken.

“It can see an accident about to happen,” Modha said.

Similarly, a mobile phone can take smells or visual information and interpret them in real time, without the need for a network connection.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Rajit Manohar, a researcher at Cornell Tech, a graduate school of Cornell University.

The project funded by the US Defense Advanced Research Projects Agency (DARPA) published its research in a cover article on the August 8 edition of the journal Science.

The researchers say TrueNorth in some ways outperforms today’s supercomputers although a direct comparison is not possible because they operate differently.

But they wrote that TrueNorth can deliver from 46 billion to 400 billion “synaptic” calculations per second per watt of energy. That compares with the most energy-efficient supercomputer, which delivers 4.5 billion “floating point” calculations per second per watt.
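These figures can be sanity-checked with quick arithmetic using only the numbers quoted (with the caveat that “synaptic” and floating-point operations are different kinds of work, so the ratio is indicative at best):

```python
# Figures quoted in the article, all in operations per second per watt.
truenorth_low, truenorth_high = 46e9, 400e9   # TrueNorth, synaptic ops
supercomputer = 4.5e9                         # most efficient supercomputer, FLOPS

low_ratio = truenorth_low / supercomputer     # ≈ 10x
high_ratio = truenorth_high / supercomputer   # ≈ 89x
print(f"roughly {low_ratio:.0f}x to {high_ratio:.0f}x more operations per watt")
```

So even at the low end of the claimed range, the chip is roughly an order of magnitude more energy-efficient per operation than the best conventional machine cited.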

The chip was fabricated using Samsung’s 28-nanometer process technology.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han of Samsung Electronics, in a statement.

“This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing.”

Modha said the researchers have produced only the chip and that it could be years before commercial applications become available.

But he said it “has the potential to transform society” with a new generation of computing technology. And he noted that hybrid computers may one day combine the “left brain” machines with the new “right brain” devices for even better performance.

More information: “A million spiking-neuron integrated circuit with a scalable communication network and interface,” by P.A. Merolla et al. Science, 2014. www.sciencemag.org/lookup/doi/… 1126/science.1254642

Nanotech “Scaling Up” Technologies Progress

A year ago I asked: why was nanotech not yet the projected world-changing technology? I answered: “There are deficiencies in the scale-up technology and business models so far employed.” Interestingly, this year we seem to be making progress on that key deficiency … reliably scaling up nanotech to useful sizes. What do we know now that we didn’t understand 14 years ago, when the first NNI was passed, or even last year?

After 14 years of $1 billion+/year in US government investment and more than twice as much by the private sector, we do not … I repeat … we do not have a “killer” product (like a killer app) based on nanotechnology, or incorporating significant nanotech, that couldn’t have been achieved in a more conventional way. No uniquely nanotech business model has been created on which to build economic wealth. After more than a decade, one should be apparent.

The promise of nanotech was always to create or assemble what nature didn’t supply us for the benefit of civilization.  That process… new stuff for the benefit of mankind … has barely occurred… despite billions of dollars already spent. 

That seems to be changing. At the risk of hubristic punditry, the next few years will see such technological products, economic changes, and a business model in which investors will feel confident. Maybe these “next few years” nanotech developments will finally be life-revolutionizing … maybe even incredibly lucrative for perceptive founders and investors. It seems that the dream of riches from nanotech … the payoff for all this extraordinary investment … is still alive, only delayed.

It is a maxim of those of us who teach technological applications, change and innovation to grad students and seasoned executives that it takes more than a decade (maybe two) for fundamental breakthroughs in core technology to appear in revolutionary and practical ways within the mass of the developed world economy. Nanotech seems to be following that pattern.

Over the last decade … and accelerating lately … worldwide we’ve developed amazing nanoscience. We’ve put together, in the lab and in the university, unique compounds that are reduced in size or created from nanotechnology building blocks to perform functions with characteristics, sensitivities and accuracies previously unavailable. We’ve uncovered ways to protect or change coatings on macro-sized manufactured things to improve performance. Using nanoscience techniques, we’ve begun to reorient medicine toward diagnostics and preventive medicine, as opposed to the symptom-treating medicine of the past century. Nanotech has made ‘green’ possible. Nanotech has made 3D printing of materials possible. New nanosize geometries and nano-containing liquids are changing the ways in which energy is stored, with huge increases in energy storage densities at ever-decreasing cost per kilowatt-hour. Materials have been modified at the nano level to produce amazingly useful electronic and physical product improvements. All this is good, but not sufficient.

One of the difficulties encountered has been finding a way to scale up wondrous single developments to useful macro size. Nanotech just doesn’t scale well, making the scaling up of breakthroughs in nanoscience to macro (usable) sizes almost as difficult and expensive as the original nanoscience or nanotech breakthroughs themselves. Nano-pros have failed to find ways to reproduce nanoscience breakthroughs reliably, with repeated high technological performance and continuous integrity, at macro-size manufacturing specs. Truthfully, there hasn’t been enough investment money devoted to this part of the nanotech development story. Moreover, it’s not sufficient just to scale the nano part of a development. Economically, the entire system containing the nanotech breakthrough has to be scaled … and technically, systems scaling is very difficult. It’s been an expensive and hard lesson to learn.

The mass of much-lionized nanotubes, both single- and multiple-wall, forms in a spaghetti-like mixed-breed mess. This “mess” is useless product-wise. The nanotubes have to be separated by type, separated from each other, and then oriented for use in a higher-level system. Not only is this process difficult to accomplish reliably, but it is also expensive, undercutting some of the economic promise of nanotube applications. Nanotubes are projected as the ‘next connectors’ in semiconductors. IBM literally has to cut grooves in substrates to orient its nanotube connectors properly for testing and for prototype use. It admits the grooves are not a solution and is searching for other ways to build nanotube-connected chips for use in its semiconductor applications.

Another area of promise in trouble is the multifunctional targeted nanoparticle for use against cancer and other human health conditions. Others and I have touted these developments as opening new thresholds in medical treatment with reduced collateral damage. In animal models and in vitro, the particles are amazingly effective … targeting and eliminating the bad stuff while inducing little ‘collateral’ damage.

However, what occurs when such ‘breakthroughs’ are introduced into the human body to improve our health as programmed is not identical. In vivo, multifunctional particles tend to clump, are not evenly distributed, and tend not to target only the sick cells … and, far more than in the models, they attach to normal cells with adverse consequences. These are some of the reasons you don’t find the FDA approving many of the numerous approaches to multifunctional medical particles for trials in humans. They don’t work as programmed, and those who have invested in the promise of the original tests have taken large financial baths … sometimes losing their entire investment as the company goes bankrupt. It is far too long after the original articles in 2005 for there not to be an entire slew of these particles on the market as products making us feel better.
Now, a more difficult issue: the economics of nanotechnology. I have written about this subject here repeatedly. Nanotech occurs at the lowest level of the value chain. There is no economic margin inherent in the development of a nanotech breakthrough. All nanotech breakthroughs have to be incorporated into a product or system upstream in the value chain, or no economic reason for further development will manifest.

Sometimes, companies have to find ‘cost avoidance’ reasons for developing a nanotech-using product. A specific example is a company called Genomic Research, which developed a product that isolated a family of genes in a certain group of post-operation breast cancer patients who could avoid expensive radiotherapy, because the data showed that the radiotherapy was ineffective in preventing recurrence of the cancer. The savings in insurance costs were successfully used to justify the economics of the Genomic Research DNA tests, so that Genomic Research now has sales in excess of $400,000,000. A true success story. The lesson here is that with a nanotech-based product, the economic justification can come from outside the nanotech industry … but such cases are few and far between.

What seems to be changing? The original promise of nanotech called for self-reproducing compounds that would automatically scale up and, because they were self-cloning, ensure that quality remained constant during scale-up and ultimate manufacture. Recently, researchers of scale-up processes have shown that combining a new compound with certain forms of DNA allows not only for movement of the compound, but also for self-reproduction … so the process of nano self-reproduction is very close to realization.

The other issue … separation of similar nano forms … also seems to be on the verge of solution. The answer seems to be to separate the compounds in solution. Placing the mix of nanoparticles into a properly ionized, pH-adjusted, or otherwise constructed solution for that compound has recently shown promise in purifying and separating the mess that comes out of nano manufacture.

A last example is what is happening with modified bacteria. Making nano stuff in a bacterial soup with zillions of mini factories seems an ideal scale-up solution. Until recently, organic only reproduced organic. But Langer’s lab at MIT spun off a company that genetically modified bacteria to produce inorganic stuff in large quantities in soups containing raw elements. I’ll cover this company in a later article.

In conclusion, there are now production-system semi-breakthroughs that hold promise to solve the scale-up dilemma. I’ll detail those next month.

Alan B. Shalleck
NanoClarity LLC


Better Batteries May Spark New Consumer Devices, Cars

BASF (BASFY), Toyota (TM) and IBM (IBM) are among companies placing sizable early bets on next-generation batteries that could better power things big or small, such as electric cars or maybe wristwatch computers, according to Lux Research analyst Cosmin Laslau. But not for a while.

First, the new batteries might get a real-world test powering unmanned aerial vehicles — drones and microvehicles — for the military, he says, as it’s a case where the customer might be willing to pay double for a 10% improvement in power for the weight. Several new technologies could deliver up to 10 times more energy than today’s batteries, Lux Research says in a new report.

The current lithium-ion (Li-ion) battery market is worth north of $10 billion, Laslau says. But for now, applications are limited at the small end by how much power output the batteries have for their size — think of how much space the battery of an Apple (AAPL) iPhone takes up. On the big end of applications are electric cars, where the cost of a large-enough battery to provide a useful number of miles in driving range is a limiting factor. Size is an issue there, too.

“When you get to large size like say a Tesla (TSLA) electric vehicle, in order to get the range people want … it might cost $30,000 for the battery alone,” Laslau said.

The report, “Beyond Lithium-Ion: A Roadmap for Next-Generation Batteries,” that Laslau put together with two contributors sees military users as the entry point for next-gen batteries around 2020 and consumer electronics adopting new solid-state batteries by 2030, but it’s a hard sell for next-gen batteries in transportation to unseat Li-ion batteries. Meanwhile, research and other kinds of gains are expected to continue improving those and push down costs.

The next-gen battery types that could be Li-ion alternatives go by names such as lithium-air, lithium-sulfur, solid-state (ceramic or polymer) and zinc-air. They have different safety and power profiles, with solid-state having a safety edge. Several startups, such as PolyPlus, Sion Power and Oxis Energy, are working on next-gen types, and Laslau says one hard part is translating them from prototype to production. BASF has put $50 million into Sion, he adds.

The report notes that giants such as IBM, Bosch, Toyota and BMW are active in battery research — and the last two recently partnered on it.

Some government-backed battery startups “have failed spectacularly,” Laslau said, with A123 Systems the prime example.

“Now the U.S. has changed tack and put $120 million into Argonne National Lab’s JCESR, the Joint Center for Energy Storage Research,” he said. It will focus on fundamental R&D rather than making bets on startups.

“We think this is a very promising development,” Laslau said, noting that the lab is also partnering “with really well-established companies like Johnson Controls (JCI) that have the expertise to mass-produce any prototypes.” Other partners include Dow Chemical (DOW) and Applied Materials (AMAT).

Read more at Investor’s Business Daily: http://news.investors.com/technology/032013-648660-next-generation-batteries-might-power-smartwatches-electric-cars.htm

Flexible electronics could transform the way we make and use electronic devices

Flexible electronics open the door to foldaway smartphone displays, solar cells on a roll of plastic and advanced medical devices — if we can figure out how to make them.

Nearly everyone knows what the inside of a computer or a mobile phone looks like: a stiff circuit board, usually green, crammed with chips, resistors, capacitors and sockets, interconnected by a suburban sprawl of printed wiring.

But what if our printed circuit board was not stiff, but flexible enough to bend or even fold?

It may sound like an interesting laboratory curiosity, but not to Enrique Gomez, an assistant professor of chemical engineering at Penn State. “It could transform the way we make and use electronic devices,” he says.

Gomez is one of many scientists investigating flexible electronics at the University’s Materials Research Institute. Others are doing the same at universities and corporations around the world.

Flexible electronics are in vogue for two reasons.

First, they promise an entirely new design tool. Imagine, for example, tiny smartphones that wrap around our wrists, and flexible displays that fold out as large as a television. Or photovoltaic cells and reconfigurable antennas that conform to the roofs and trunks of our cars. Or flexible implants that can monitor and treat cancer or help paraplegics walk again.

Penn State’s interest in flexible and printed electronics is not just theoretical. In October 2011, the University began a multi-year research project with Dow Chemical Corporation.

Second, flexible electronics might cost less to make. Conventional semiconductors require complex processes and multi-billion dollar foundries. Researchers hope to print flexible electronics on plastic film the same way we print ink on newspapers.

“If we could make flexible electronics cheap enough, you could have throwaway electronics. You could wear your phone on your clothing, or run a bioassay to assess your health simply by wiping your nose with a tissue,” Gomez says.

Before any of this happens, though, researchers have to rethink what they know about electronics.

Victim of Success

That means understanding why conventional electronics are victims of their own success, says Tom Jackson, Kirby Chair Professor of Electrical Engineering. Jackson should know, because he helped make them successful. Before joining Penn State in 1992, he worked on IBM’s industry-leading laptop displays. At Penn State, he pioneered the use of organic molecules to make transistors and electronic devices.

Modern silicon processors integrate billions of transistors, the semiconductor version of an electrical switch, on tiny slivers of crystalline silicon.

Squeezing so many transistors in a common location enables them to handle complex problems. As they shrink in size, not only can we fit more transistors on a chip, but the chip gets less expensive to manufacture.

“It is hard to overstate how important this has been,” Jackson explains.

“Remember when we paid for long-distance phone calls by the minute? High-speed switching drove those costs way down. In some cases, we can think of computation as free. You can buy an inexpensive calculator at a store for $1, and the chip doesn’t even dominate the cost. The power you get is amazing.”

That, says Jackson, is the problem. Semiconductor processors are so good and so cheap, we fall into the trap of thinking they can solve every problem.

Sometimes, it takes more flexibility to succeed.


Consider surgery to remove a tumor from a patient’s liver. Even after following up with radiation or chemotherapy, the surgeon is never sure if the treatment was successful.

“But suppose I could apply a flexible circuit to the liver and image the tissue. If we see a new malignancy, it could release a drug directly onto that spot, or heat up a section of the circuit to kill the malignant cells. And when we were done, the body would resorb the material,” Jackson says.

“What I want,” he says, “is something that matches the flexibility and thermal conductivity of the body.” Conventional silicon technology is too stiff and thermally conductive to work.

Similarly, large, flexible sensors could monitor vibrations on a bridge or windmill blade and warn when they needed maintenance.

“If you want to spread 100 or 1,000 sensors over a large area, you have to ask whether you want to place all the chips you need to do that, or use low-cost flexible electronics that I can apply as a single printed sheet,” Jackson says.

None of the flexible electronics now under development would match the billions of transistors that now fit on silicon chips, or their billions of on-off cycles per second. They would not have to. After all, even today’s fastest televisions refresh their displays only 240 times per second. That is more than fast enough to image cancer in the body, reconfigure an antenna, or assess the stability of a bridge.

So how, exactly, do we make flexible electronics, and what kind of materials do we make them from?


To explain what draws researchers to printing flexible electronics, Jackson walks through the production of flat panel displays in a $2-3 billion factory.

The process starts with a 100-square-foot plate of glass. To apply wires, the factory coats the entire plate with metal, then covers it with a photosensitive material called a resist. An extremely bright light flashes the pattern of the wires onto the coating, hardening the resist. In a series of steps, the factory removes the unhardened resist and metal under it. Then, in another series of steps, it removes the hardened resist, leaving behind the patterned metal wires.

Factories repeat some variant of this process four or five times as they add light-emitting diodes (LEDs), transistors and other components. With each step, they coat the entire plate and wash away unused materials. While the cost of a display is 70 percent that of a finished device, most of those materials get thrown away.


“So it’s worth thinking about whether we can do this by putting materials where we need them, and reduce the cost of chemicals and disposal. It is a really simple idea and really hard to do,” Jackson says.
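Jackson’s point about putting materials only where they are needed can be made concrete with a toy utilization model (the numbers below are hypothetical, chosen only to illustrate the argument, not real fab figures):

```python
# Toy material-utilization model: subtractive patterning coats the whole
# plate and washes away everything outside the pattern, while additive
# printing deposits material only where it is needed.
# All figures here are hypothetical, for illustration only.

def material_used(plate_area_m2, pattern_fraction, layers, additive):
    """Total material 'area' consumed across all patterning layers."""
    per_layer = plate_area_m2 * (pattern_fraction if additive else 1.0)
    return per_layer * layers

plate = 9.3       # ~100 square feet of glass, in square meters
pattern = 0.15    # assumed fraction of each layer that ends up in the device
layers = 5        # coat/expose/etch cycles, as described above

subtractive = material_used(plate, pattern, layers, additive=False)
additive = material_used(plate, pattern, layers, additive=True)
print(f"additive printing uses {additive / subtractive:.0%} of the material")
```

Under these assumed numbers, additive printing consumes only the patterned fraction of each layer, which is exactly the savings in chemicals and disposal Jackson describes.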

An ideal way to do that, most researchers agree, would be to print the electronics on long plastic sheets as they move through a factory. A printer would do this by applying different inks onto the film. As the inks dried, they would turn into wires, transistors, capacitors, LEDs and all the other things needed to make displays and circuits.

That, at least, is the theory. The problem, as anyone who ever looked at a blurry newspaper photograph knows, is that printing is not always precise. Poor alignment would scuttle any electronic device. Some workarounds include vaporizing or energetically blasting materials onto a flexible sheet, though this complicates processing.

And then, of course, there are the materials. Can we print them? How do we form the precise structures we need? And how do we dry and process them at temperatures low enough to keep from melting the plastic film?

Material World

Fortunately, there are many possible materials from which to choose. These range from organic materials, like polymers and small carbon-based molecules, to metals and even ceramics.

At first glance, flexible ceramics seem like a stretch. Metals bend, and researchers can often apply them as zigzags so they deform more easily.

Try flexing a thick ceramic, though, and it cracks. Yet that has not deterred Susan Trolier-McKinstry, a professor of ceramic science and engineering and director of Penn State’s W.M. Keck Smart Materials Integration Laboratory.

Ceramics, she explains, are critical ingredients in capacitors, which can be used to regulate voltage in electronic circuits. In many applications, transistors use capacitors to provide instantaneous power rather than waiting for power from a distant source.

Industry makes capacitors from ultrafine powders. The tiniest layer thicknesses are 500 nanometers, 40 times smaller than a decade or two ago. Even so, there is scant room for them on today’s overcrowded circuit boards, especially in smartphones. Furthermore, there is a question about how long industry can continue to scale the thickness in multilayer ceramic capacitors.
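The push toward ever-thinner layers follows from the standard parallel-plate approximation for each dielectric layer of a multilayer capacitor:

```latex
C = \varepsilon_0 \varepsilon_r \frac{A}{d}
```

Capacitance C scales inversely with the dielectric thickness d (for plate area A and relative permittivity ε_r), so halving the layer thickness doubles the capacitance per layer while also letting more layers fit in the same package. That is why the industry keeps shrinking the layers, and why there is a physical limit to how far the scaling can go.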

Trolier-McKinstry thinks she can deposit smaller capacitors directly onto flexible sheets of plastic, and sandwich these in flexible circuit boards. That way, the capacitors do not hog valuable surface area.

One approach is to deposit a precursor to the capacitor from a solution onto a plastic film and spot-heat each capacitor with a laser to remove the organics and crystallize the ceramic into a capacitor. Another approach is to use a high-energy laser beam to sandblast molecules off a solid ceramic and onto a plastic substrate.

As long as she can keep capacitor thicknesses small, Trolier-McKinstry need not worry too much about capacitor flexibility. Previous researchers have demonstrated that it is possible to bend some electroceramic films around the radius of a Sharpie pen without damage.

Of course, not every element placed on a flexible substrate will be small. So what happens if your transistors need to bend?

One way to solve that problem is to make electronics from organic materials like plastics. These are the ultimate flexible materials. While most organics are insulators, a few are conductive.

“Organic molecules have tremendous chemical versatility,” Gomez explains. “My group’s goal is to turn these molecules into transistors and photovoltaic cells.” Easier said than done. The almost infinite number of possibilities available in organic chemistry, he says, make it challenging to find the right combination of structure, properties and function to create an effective device.

Molecules may not be picky about their neighbors, but they still need to form the right type of structures to act as switches or turn light into electricity. Gomez attacks the problem by using a technique called self-assembly. It starts with block copolymers, combinations of two molecules with different properties bound together in the middle.


“Think of them as a dog and a cat tied together by their tails,” Gomez explains. “Ordinarily, they want to run away from each other, but now they can’t. Then we throw them into a room with other tied dogs and cats. What happens is that all the cats wind up on one side of the room and the dogs on the other, so they don’t have to look at each other.”

Gomez believes this process could enable him to build molecules programmed to self-assemble into electronic structures at very low cost.

“The overarching problem,” Gomez continues, “is figuring out how to design the molecule and then tickle it with pressure, temperature and electrical fields to form useful structures. We don’t really understand enough to do that yet.”

Despite the challenge, flexible electronics promise changes that go beyond folding displays, inexpensive solar cells, antennas and sensors. They could veer off in some unexpected directions, such as helping paraplegics walk again.

Mimicking Jell-O

That is the goal of Bruce Gluckman, associate director of Penn State’s Center for Neural Engineering. To get there, he must learn how the brain’s neurons collaborate.

“Computations happen at the level of single neurons that connect to other neurons. Half the brain is made up of the wiring for these connections, and any cell can connect to a cell next to it or to a cell across the brain. It’s not local in any sense,” he explains.

Scientists measure the electrical activity of neurons by implanting silicon electrodes into the brain. Unfortunately, Gluckman says, the brain is as spongy as Jell-O and the electrode is as stiff as a knife.

Plunging the electrode into the brain causes damage immediately. Every time the subject moves its head, the brain pulls away from the electrode on one side and makes better contact on the other. It takes racks of electronics to separate the signal from the noise of such inconsistent output.

“This is why we need something other than silicon,” Gluckman says.


Flexible electronics would better match the brain’s springiness. While some researchers are looking at all-organic electrodes, Gluckman believes they are too large and too slow to achieve the resolution he needs. Instead, he has teamed with Jackson to develop a flexible electrode based on zinc oxide, a faster semiconductor that can be deposited on plastic at low temperatures.

The work is still in its early stages, but Gluckman believes they can develop a reliable electrode that lasts for years and produces stronger, clearer signals.

Researchers have already demonstrated that humans can control computer cursors, robotic arms and even artificial voice boxes with today’s problematic electrodes. Yet the results are often short-lived.

“No one is going to let you operate on their brain twice,” Gluckman says. “If you want to directly animate limbs with an implant, the implant has to last the life of the patient. If we can do that, we can enable paraplegics to get around on their own.”

As Jackson notes, computers and smartphones may have powered silicon’s development, but the results are visible in everything from cars and digital thermometers to toys and even greeting cards.

Displays and solar cells are likely to power the new generation of flexible electronics, but brain implants are just one of the many unexpected directions they may take.

Enrique Gomez is assistant professor of chemical engineering, edg12@psu.edu. Thomas N. Jackson is Kirby Chair Professor of Electrical Engineering, tnj1@psu.edu. Susan Trolier-McKinstry is professor of ceramic science and engineering and director of the W.M. Keck Smart Materials Integration Laboratory, STMcKinstry@psu.edu. Bruce Gluckman is associate director of Penn State’s Center for Neural Engineering, bjg18@psu.edu.

What Nano-Science can do to Change our Future for the Better:

Heiner Linke

Heiner Linke is a Professor of Nanophysics and the Deputy Director of the Nanometer Structure Consortium at Lund University. He talks about the possibilities of nanoscience for confronting future challenges such as energy conservation and environmentally friendly energy production.