Read Genesis Nanotech Online: Latest “Nano-News” and Updates


Read Today’s Top Stories in Nanotechnology and the ‘Business’ of Nanotechnology. 

Stories about the Discoveries and Technologies that will reshape our world and drive New Economic Engines for the Future.

Read Genesis Nanotechnology Online Here

Stories Like:

Hacking metastasis: Nanotechnology researchers find new way to target tumors


and …

Canadian Investors Need to Think Globally to Compete with US Counterparts


and much more …


Genesis Nanotechnology, Inc. ~ “Great Things from Small Things”

Follow Us on Facebook

Up to the Minute Nanotech News on Our Twitter Feed

“Link Up” with Us on LinkedIn

Connect with Our Website

Watch Our YouTube Video


Catalyst efficiency improved for clean industries

Mobile platinum oxide species trapped on a cerium oxide surface. The bonding of the platinum to surface oxygens creates isolated platinum atoms that are thermally stable and active for treatment of automotive exhaust pollutants.
Credit: Washington State University

Researchers have developed a way to use less platinum in chemical reactions commonly used in the clean energy, green chemicals, and automotive industries, according to a paper in Science.

Led by the University of New Mexico in collaboration with Washington State University, the researchers developed a unique approach for trapping platinum atoms that improves the efficiency and stability of the reactions.

Platinum is used as a catalyst in many clean energy processes, including in catalytic converters and fuel cells. The precious metal facilitates chemical reactions for many commonly used products and processes, such as converting poisonous carbon monoxide to less harmful carbon dioxide in catalytic converters.

Because of its expense and scarcity, industries are continually looking to use less of it and to develop catalysts that use individual platinum atoms more efficiently in their reactions. At high temperatures, however, the atoms become mobile and clump together, which reduces the catalyst’s efficiency and degrades its performance. This is the primary reason why catalytic converters must be tested regularly to ensure they don’t become less effective over time.

“Precious metals are widely used in emission control, but there are always the issues of how to best utilize them and to keep them stable,” said Yong Wang, Voiland Distinguished Professor in the Gene and Linda Voiland School of Chemical Engineering and Bioengineering and a co-author on the paper. “You want to use as little as possible to achieve your objectives, but it’s normally hard to keep the atoms highly dispersed under working conditions.”

The University of New Mexico and WSU research team developed a method to capture the platinum atoms that keeps them stable and lets them continue their catalyzing activity. The researchers used a common, inexpensive manufacturing material known as cerium oxide to create a tiny, nano-scale trap. They shaped the cerium oxide into nanometer-sized rods and polyhedrons, which look like tiny pieces of rock candy, to capture the platinum atoms. With their large surface areas and sufficiently high number of defects, the cerium oxide nano-shapes are able to capture the platinum atoms on their surfaces and keep them from clumping together, so that the platinum can continue to do its work.

“The atom-trapping technique should be broadly applicable for preparing single-atom catalysts,” said Abhaya Datye, a Distinguished Regents’ Professor of Chemical and Biological Engineering at The University of New Mexico, who led the study. “It is remarkable that simply combining the ceria with a platinum catalyst was sufficient to allow trapping of the atoms and retaining the performance of the catalyst.

“Even more surprising is that the process of trapping occurs by heating the catalyst to high temperatures — precisely the conditions used for accelerated aging of these catalysts,” he added.

Adding the cerium oxide to the catalyst is a simple process, too, with no exotic precursors needed.

“This work provides the guiding principles, so that industry can design catalysts to better utilize precious metals and keep them much more stable,” added Wang.

Story Source:

The above post is reprinted from materials provided by Washington State University. The original item was written by Tina Hilding. Note: Materials may be edited for content and length.

Journal Reference:

  1. J. Jones, H. Xiong, A. T. DeLaRiva, E. J. Peterson, H. Pham, S. R. Challa, G. Qi, S. Oh, M. H. Wiebenga, X. I. Pereira Hernandez, Y. Wang, A. K. Datye. Thermally stable single-atom platinum-on-ceria catalysts via atom trapping. Science, 2016; 353 (6295): 150. DOI: 10.1126/science.aaf8800

“It’s ALL About the WATER!” 2 Maps that Show the next Potential Catastrophe Affecting the Middle East: Solving the World’s Water Crisis



As sectarian strife spearheaded by ISIS convulses the Middle East, and tensions between Iran and Saudi Arabia only deepen, it is hard to imagine that a far more pressing concern could be threatening the region.

But a series of maps from the UN show that despite the awful suffering already occurring throughout the Middle East, things could always become significantly worse. The central issue that will affect the region, vast swathes of North and East Africa, and even Central Asia and China is the increasing strain on and lack of the world’s single most important resource — water.

The following map from the UN Water’s 2015 World Water Development Report shows the total amount of renewable water sources per capita available in each country in the world. In 2013, a number of countries — including regional heavyweights such as Saudi Arabia and Jordan — were facing absolute water scarcity.

Egypt, the most populous country in the Middle East, faced water scarcity, as did Syria and Sudan. In Sudan, the lack of water is believed to be one of the root causes of the continuing conflict in the Darfur region as various groups have continued to compete over the increasingly scarce resource.

And the UN expects water scarcity to only intensify. By 2030, UN Water predicts that the world will “face a 40% global water deficit under the business-as-usual” scenario. This strain on water, unless proactively addressed, will only cause further inter- and intra-state conflicts.

Again, according to UN Water, “inter-state and regional conflicts may also emerge due to water scarcity and poor management structures. It is noteworthy that 158 of the world’s 263 transboundary water basins lack any type of cooperative management framework.”

Essentially, the world as it currently is will continue to face worse water crises. These crises will force states, or individuals within states, to go to extreme lengths to survive. And without significant frameworks in place, people may resort to conflict for survival.

The following map, from the UN World Water Development Report 2016, shows the proportion of renewable water resources that have already been withdrawn. The Middle East and Central Asia are again at significant risk, as the majority of countries in both regions have withdrawn more than 60% of their water resources.






Read More on How New (Nano) Technology Can Help Solve Our Looming Water Issues

MIT: How can we Use Renewable Energy to Solve the Water Crisis – Solar Desalination? (Video)

Published 2015 Water scarcity is a growing problem across the world. John H. Lienhard V and his team at MIT are exploring how to make renewable energy more efficient and affordable. Mechanical engineers and nanotechnologists are looking at different methods, including solar desalination. COMING ~ JUNE 2015 ~ “Great Things from Small Things” ~ Watch […]

How Graphene Desalination Could Solve Our Planet’s Water Supply Problems: Video

Nanoscale Desalination of Seawater Through Nanoporous Graphene

Perhaps the most repeated words in the last few years when talking about graphene — since scientists Geim and Novoselov were awarded the Nobel Prize in Physics in 2010 for their groundbreaking experiments — are “the material of the future”. There are some risks regarding so many expectations about everything related to […]

Nanotechnology to provide efficient, inexpensive water desalination

A Faster Future: Graphene Based Optoelectronics: Three Major University Programs Collaborate


Graphene-based Schottky photodetector device.
Credit: Dr Ilya Goykhman, Cambridge Graphene Centre, University of Cambridge

As an important step towards graphene integration in silicon photonics, researchers from the Graphene Flagship have published a paper which shows how graphene can provide a simple solution for silicon photodetection in the telecommunication wavelengths. Published in Nano Letters, this exciting research is a collaboration between the University of Cambridge (UK), The Hebrew University (Israel) and Johns Hopkins University (USA).

The mission of the Graphene Flagship is to translate graphene out of the academic laboratory, through industry and into society. This broad and ambitious aim has been at the forefront of the choices made to direct the Flagship; it focuses on real problem areas where it can make a real difference such as in Optical Communications.

Optical Communications are increasingly important because they have the potential to solve one of the biggest problems of our information age: energy consumption. Almost everything we do in everyday life consumes information and all of this information is powered by energy. If we want more and more information, we need more and more energy. In the near future, the major consumers of data traffic will be machine-to-machine communication and the Internet of Things (IoT).

To enable the IoT and the level of information it requires, current silicon photonics has a problem: it needs ten times more energy than we can provide. So, if we want this new, improved internet age, new technological, power-efficient solutions need to be found. This is why the drive to graphene-based optical communication is so important.

Over the last few years, optical communications have increased their viability over standard metal-based electronic interconnects. The current silicon-based photodetector used in optical communications has a major issue when it comes to detecting data in the near infrared range, which is the range used for telecommunications. The telecom industry has overcome this problem by integrating germanium absorbers with the standard silicon photonic devices. They have been able to make fully functioning devices on chips using this process. However, this process is complex.

In the new paper, graphene is interfaced with silicon on chip to make high-responsivity Schottky barrier photodetectors. These graphene-based photodetectors achieve 0.37 A/W responsivity at 1.55 μm using avalanche multiplication. This high responsivity is comparable to that of the silicon-germanium detectors currently used in silicon photonics.
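The reported figures can be sanity-checked with standard photodiode arithmetic: responsivity R relates photocurrent to incident optical power (I = R·P), and the external quantum efficiency is η = Rhc/(qλ). A minimal Python sketch, where the 0.37 A/W and 1.55 μm values come from the paper and the helper functions are illustrative, not from the authors:

```python
# Back-of-the-envelope check on the reported photodetector figures.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # electron charge, C

def photocurrent(responsivity_a_per_w, optical_power_w):
    """Photocurrent I = R * P for a detector of responsivity R."""
    return responsivity_a_per_w * optical_power_w

def external_quantum_efficiency(responsivity_a_per_w, wavelength_m):
    """Fraction of incident photons converted to collected electrons:
    eta = R * h * c / (q * lambda)."""
    return responsivity_a_per_w * H * C / (Q * wavelength_m)

R = 0.37        # A/W, from the article
lam = 1.55e-6   # m, telecom C-band wavelength

print(photocurrent(R, 1e-3))                 # current for 1 mW of input light
print(external_quantum_efficiency(R, lam))   # ~0.3 photon-to-electron conversion
```

At 1.55 μm a responsivity of 0.37 A/W works out to roughly 30% net photon-to-electron conversion; the avalanche multiplication the article mentions is what boosts responsivity beyond what the bare graphene-silicon junction alone would give.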

Three Major University Programs Collaborate
  • University of Cambridge (UK)
  • The Hebrew University (Israel)
  • Johns Hopkins University (USA)


Prof. Andrea Ferrari from the Cambridge Graphene Centre, who is also the Science and Technology Officer and the Chair of the Management Panel for the Graphene Flagship, stated: “This is a significant result which proves that graphene can compete with the current state of the art by producing devices that can be made more simply and cheaply, and work at different wavelengths, thus paving the way for graphene integrated silicon photonics.”

Dr Ilya Goykhman, from the University of Cambridge, and the paper’s lead author, said: “The vision here is for graphene to play an important part in enabling optical communication technologies. This is a first step towards this, and, over the next two years, the aim of the wafer-scale integration and optoelectronics work-packages of the Flagship is to really make this happen.”

Talking further about the Graphene Flagship and its collaborative approach to research, Prof Ferrari commented: “Graphene can beat current silicon photonic technology in terms of energy consumption. The Graphene Flagship is investing a lot of resources into wafer-scale integration with the creation of a new Work Package. We have identified a vision, where graphene is the backbone for data communication, and we plan to have a telecommunication bank capable of transferring 4×28 Gb/s by 2018. The research in this Nano Letters paper is the first step towards achieving that vision, the importance of which is clearly recognised by companies such as Ericsson and Alcatel-Lucent who have joined the Flagship to help develop it.”

“We have shown the potential for the detector but we also need to produce a graphene-based modulator to have a full, low energy optical telecommunication system and the Flagship is working hard on this problem. The Flagship has collected the right people in the right place at the right time to work together towards this goal. Europe will be at the cutting edge of this technology. It is a great challenge, and a great opportunity for Europe, as there is such high added value to the devices it will be cost effective to manufacture the device in Europe — keeping the value of the technology within the European community,” said Prof. Ferrari.

Story Source:

The above post is reprinted from materials provided by the Graphene Flagship. Note: Materials may be edited for content and length.

Journal Reference:

  1. Ilya Goykhman, Ugo Sassi, Boris Desiatov, Noa Mazurski, Silvia Milana, Domenico de Fazio, Anna Eiden, Jacob Khurgin, Joseph Shappir, Uriel Levy, Andrea C. Ferrari. On-Chip Integrated, Silicon–Graphene Plasmonic Schottky Photodetector with High Responsivity and Avalanche Photogain. Nano Letters, 2016; 16 (5): 3005. DOI: 10.1021/acs.nanolett.5b05216

Michigan Tech U: Understanding the physics of “Quantum Tunneling” for Ultra-Thin Nano-Wire Transistors

A field effect transistor (FET) uses a gate bias to control electrical current in a channel between a source and drain, which produces an electrostatic field around the channel. Credit: Michigan Technological University

Nearly 1,000 times thinner than a human hair, nanowires can only be understood with quantum mechanics. Using quantum models, physicists from Michigan Technological University have figured out what drives the efficiency of a silicon-germanium (Si-Ge) core-shell nanowire transistor.

Core-Shell Nanowires

The study, published last week in Nano Letters, focuses on the quantum tunneling in a core-shell nanowire structure. Ranjit Pati, a professor of physics at Michigan Tech, led the work along with his graduate students Kamal Dhungana and Meghnath Jaishi.

Core-shell nanowires are like a much smaller version of an electrical cable, where the core region of the cable is made up of a different material than the shell region. In this case, the core is made from silicon and the shell is made from germanium. Both silicon and germanium are semiconducting materials. Being so thin, these semiconducting core-shell nanowires are considered one-dimensional materials that display unique physical properties.

The arrangements of atoms in these nanowires determine how the electrons traverse through them, Pati explains, adding that a more comprehensive understanding of the physics that drive these nanoscale transistors could lead to increased efficiency in electronic devices.

“The performance of a heterogeneous silicon-germanium nanowire transistor is much better than a homogeneous silicon nanowire,” Pati says. “In our study, we’ve unraveled the quantum phenomena responsible for its superior performance.”

Field Effect Transistors

Transistors power our digital world. And they used to be large—or at least large enough for people to see. With advances in nanotechnology and materials science, researchers have been able to minimize the size and maximize the number of transistors that can be assembled on a microchip.

The particular transistor that Pati has been working on is a field effect transistor (FET) made out of core-shell nanowires. It manipulates the electrical current in the nanowire channel using a gate bias. Simply put, a gate bias affects electric current in the channel like a valve controls water flow in a pipe. The gate bias produces an electrostatic field effect that induces a switching behavior in the channel current. Controlling this field can turn the device on or off, much like a light switch.
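The valve analogy can be sketched numerically. In a conventional FET, channel current below threshold grows roughly exponentially with gate voltage, by about one decade for every 60-70 mV at room temperature. The toy Python model below is purely illustrative; none of its numbers come from the Michigan Tech study:

```python
def subthreshold_current(v_gate, i_off=1e-12, swing_v_per_decade=0.070):
    """Toy model of FET switching: below threshold, channel current grows
    exponentially with gate bias, by one decade per 'swing' volts
    (~60-70 mV at room temperature). All numbers are illustrative."""
    decades = v_gate / swing_v_per_decade
    return i_off * 10.0 ** decades

# The gate bias acts like opening a valve:
print(subthreshold_current(0.0))   # "off" leakage current, 1e-12 A here
print(subthreshold_current(0.35))  # five decades more current: device is "on"
```

A small change in gate voltage thus swings the channel between a barely conducting “off” state and a strongly conducting “on” state, which is the switching behavior the article describes.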

Quantum tunneling of electrons across germanium atoms in a core-shell nanowire transistor. The close-packed alignment of dumbbell-shaped pz-orbitals directs the physics of tunneling. Credit: Michigan Technological University

Several groups have successfully fabricated core-shell nanowire FETs and demonstrated their effectiveness over the transistors currently used in microprocessors. What Pati and his team looked at is the quantum physics driving their superior performance.

The electrical current between source and drain in a nanowire FET cannot be understood using classical physics. That’s because electrons do strange things at such a tiny scale.

“Imagine a fish trapped inside a fish tank; if the fish has enough energy, it could jump up over the wall,” Pati says. “Now imagine an electron in the tank: if it has enough energy, the electron could jump out—but even if it doesn’t have enough energy, the electron can tunnel through the side walls, so there is a finite probability that we would find an electron outside the tank.”

This is known as quantum tunneling. For Pati, catching the electron in action inside the nanowire transistors is the key to understanding their superior performance. He and his team used what is called a first-principles quantum transport approach to determine what causes the electrons to tunnel efficiently in the core-shell nanowires.
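For a feel of the numbers, the simplest textbook estimate (a rectangular barrier treated in the WKB approximation) gives a tunneling probability T ≈ exp(−2d√(2m(V−E))/ħ) that falls off exponentially with barrier width d. The sketch below is this generic estimate only, not the first-principles transport method Pati’s team used, and its barrier values are made up for illustration:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E  = 9.109e-31    # electron mass, kg
EV   = 1.602e-19    # joules per electron-volt

def tunneling_probability(barrier_ev, energy_ev, width_nm):
    """WKB estimate T ~ exp(-2*kappa*d) for a rectangular barrier.
    Returns 1.0 when the electron is above the barrier top."""
    if energy_ev >= barrier_ev:
        return 1.0
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# An electron 1 eV below the barrier top still leaks through a thin wall:
print(tunneling_probability(2.0, 1.0, 0.5))  # ~0.6% for a 0.5 nm barrier
print(tunneling_probability(2.0, 1.0, 2.0))  # far smaller for a 2 nm barrier
```

The exponential sensitivity to barrier width and height is why atomic-scale structure, such as the orbital alignment discussed below, matters so much for nanowire conduction.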

The quantum tunneling of electrons—an atomic-scale game of hopscotch—is what enables the electrons to move through the nanowire materials connecting the source and drain. And the movement gets more specific than that: the electrons almost exclusively hop across the germanium shell but not through the silicon core. They do so through the aligned pz-orbitals of the germanium.

Simply put, these orbitals, which are dumbbell-shaped regions of high probability for finding an electron, are perfect landing pads for tunneling electrons. The specific alignment—color-coded in the diagram above—makes tunneling even easier. It’s like the difference between trying to burrow through a well with steel walls versus sand walls. The close-packed alignment of the pz-orbitals in the germanium shell enables electrons to tunnel from one atom to another, creating a much higher electrical current when switched on. In the case of homogeneous silicon nanowires, there is no close-packed alignment of the pz-orbitals, which explains why they are less effective FETs.

Nanowires in Electronics

There are many potential uses for nanowire FETs. Pati and his team write in their Nano Letters paper that they “expect that the electronic orbital level understanding gained in this study would prove useful for designing a new generation of core−shell nanowire FETs.”

Specifically, having a heterogeneous structure offers additional mobility control and superior performance over the current generation of transistors, in addition to compatibility with the existing silicon technology. The core-shell nanowire FETs could transform our future by making computers more powerful, phones and wearables smarter, cars more interconnected and electrical grids more efficient. The next step is simply taking a small quantum leap.

Explore further: Universal transistor serves as a basis to perform any logic function

More information: Kamal B. Dhungana et al. Unlocking the Origin of Superior Performance of a Si–Ge Core–Shell Nanowire Quantum Dot Field Effect Transistor, Nano Letters (2016). DOI: 10.1021/acs.nanolett.6b00359


Penn State: New clues could help scientists harness the power of photosynthesis

This illustration is a model of Chl f synthase, potentially a ChlF dimer, based on the known X-ray structure of the core of the Photosystem II reaction center. Photosystem II is the light-driven enzyme that oxidizes water to produce oxygen.

Identification of a gene needed to expand light harvesting in photosynthesis into the far-red-light spectrum provides clues to the development of oxygen-producing photosynthesis, an evolutionary advance that changed the history of life on Earth. “Knowledge of how photosynthesis evolved could empower scientists to design better ways to use light energy for the benefit of mankind,” said Donald A. Bryant, the Ernest C. Pollard Professor of Biotechnology and professor of biochemistry and molecular biology at Penn State University and the leader of the research team that made the discovery.

This discovery, which could enable scientists to engineer crop plants that more efficiently harness the energy of the Sun, will be published online by the journal Science on Thursday July 7, 2016.

“Photosynthesis usually ranks about third after the origin of life and the invention of DNA in lists of the greatest inventions of evolution,” said Bryant. “Photosynthesis was such a powerful invention that it changed the Earth’s atmosphere by producing oxygen, allowing diverse and complex life forms—algae, plants, and animals—to evolve.”

The researchers identified the gene that converts chlorophyll a—the most abundant light-absorbing pigment used by plants and other organisms that harness energy through photosynthesis—into chlorophyll f—a type of chlorophyll that absorbs light in the far-red range of the light spectrum. There are several different types of chlorophyll, each tuned to absorb light in different wavelengths. Most organisms that get their energy from photosynthesis use light in the visible range, wavelengths of about 400 to 700 nanometers. Bryant’s lab previously had shown that chlorophyll f allows certain cyanobacteria—bacteria that use photosynthesis and that are sometimes called blue-green algae—to grow efficiently in light just outside of the usual human visual range—far-red light (700 to 800 nanometers). The ability to use light wavelengths other than those absorbed by plants, algae, and other cyanobacteria confers a powerful advantage to those organisms that produce chlorophyll f—they can survive and grow when the visible light they normally use is blocked.

This illustration shows the newly discovered evolutionary scheme for the type-1 and type-2 reaction centers of photosynthesis. Reaction centers are protein machines that convert light energy into stable reductants that can be used by cells.


“There is nearly as much energy in the far-red and near-infrared light that reaches the Earth from the Sun as there is in visible light,” said Bryant. “Therefore, the ability to extend light harvesting in plants into this range would allow the plants to more efficiently use the energy from the Sun and could increase plant productivity.”
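Those wavelength ranges map directly onto photon energies through E = hc/λ, conveniently approximated as E[eV] ≈ 1240/λ[nm]. A short illustrative calculation, where the wavelength ranges come from the article and the rest is standard physics:

```python
def photon_energy_ev(wavelength_nm):
    """Photon energy in eV: E = h*c/lambda, i.e. ~1240/lambda[nm]."""
    return 1239.84 / wavelength_nm

# Visible range used by most photosynthetic organisms:
print(photon_energy_ev(400))   # ~3.10 eV (violet end)
print(photon_energy_ev(700))   # ~1.77 eV (red edge)
# Far-red range opened up by chlorophyll f:
print(photon_energy_ev(800))   # ~1.55 eV
```

Each far-red photon carries less energy than a visible one, but there are so many of them reaching Earth that harvesting the 700-800 nm band adds a substantial energy budget, which is the advantage Bryant describes.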

The gene the researchers identified encodes an enzyme that is distantly related to one of the main components of the protein machinery used in oxygen-producing photosynthesis. The researchers showed that the conversion of chlorophyll a to chlorophyll f requires only this one enzyme in a simple system that could represent an early intermediate stage in the evolution of photosynthesis. Understanding the mechanism by which the enzyme functions could provide clues that enable scientists to design better ways to use light energy.

“There is intense interest in creating artificial photosynthesis as an alternative energy source,” said Bryant. “Understanding the evolutionary trajectory that nature used to create oxygen production in photosynthesis is one component that will help scientists design an efficient and effective system. The difficulty is that photosynthesis is an incredibly complex process with hundreds of components and, until now, there were few known intermediate stages in its evolution. The simple system that we describe in this paper provides a model that can be further manipulated experimentally for studying those early stages in the evolution of photosynthesis.”

By disabling the gene that encodes the enzyme in two cyanobacteria that normally produce chlorophyll f, the researchers demonstrated that the enzyme is required for the production of chlorophyll f. The experiment showed that, without this enzyme, these cyanobacteria could no longer synthesize chlorophyll f. By artificially adding the gene that encodes the enzyme, the researchers also showed that this one enzyme is all that is necessary to convert cyanobacteria that normally do not produce chlorophyll f into ones that can produce it.

Another clue that the newly identified enzyme could represent an early stage in the evolution of photosynthesis is that the enzyme requires light to catalyze its reaction and may not require oxygen, as scientists had previously suspected. “Because the enzyme that synthesizes chlorophyll f requires light but may not require oxygen for its activity, it is possible that it evolved before Photosystem II, the photosynthetic complex that produces oxygen and to which the enzyme is related. If the enzyme is an evolutionary predecessor of Photosystem II, then evolution borrowed an enzyme that was originally used for chlorophyll synthesis and used it to evolve an enzyme that could produce oxygen, which ultimately led to changes in Earth’s atmosphere,” said Bryant.

Explore further: Hot-spring bacteria reveal ability to use far-red light for photosynthesis

More information: “Light-dependent chlorophyll f synthase is a highly divergent paralog of PsbA of photosystem II,” Science, 2016.


MIT: A Research “Convergence” to Transform Biomedicine


The report, “Convergence: The Future of Health,” draws on insights from scientists and researchers across academia, industry, and government. The cover of the report is an image of lungs, one of the major sites of metastasis. To help protect the lungs (blue) from this deadly process, bioengineers have created microscopic drug depots (red) to focus the effect of anti-cancer drugs that may have limited or toxic effects when delivered to the whole body.

Image: Gregory Szeto, Adelaide Tovar, Jeffrey Wyckoff, Irvine Laboratory/Koch Institute at MIT

Report calls for more integration of physical, life sciences for needed advances in biomedical research.

What if lost limbs could be regrown? Cancers detected early with blood or urine tests, instead of invasive biopsies? Drugs delivered via nanoparticles to specific tissues or even cells, minimizing unwanted side effects? While such breakthroughs may sound futuristic, scientists are already exploring these and other promising techniques.

But the realization of these transformative advances is not guaranteed. The key to bringing them to fruition, a landmark new report argues, will be strategic and sustained support for “convergence”: the merging of approaches and insights from historically distinct disciplines such as engineering, physics, computer science, chemistry, mathematics, and the life sciences.

The report, “Convergence: The Future of Health,” was co-chaired by Tyler Jacks, the David H. Koch Professor of Biology and director of MIT’s Koch Institute for Integrative Cancer Research; Susan Hockfield, noted neuroscientist and president emerita of MIT; and Phillip Sharp, Institute Professor at MIT and Nobel laureate, and will be presented at the National Academies of Sciences, Engineering, and Medicine in Washington on June 24.

The report draws on insights from several dozen expert participants at two workshops, as well as input from scientists and researchers across academia, industry, and government. Their efforts have produced a wide range of recommendations for advancing convergence research, but the report emphasizes one critical barrier above all: the shortage of federal funding for convergence fields.

“Convergence science has advanced across many fronts, from nanotechnology to regenerative tissue,” says Sharp. “Although the promise has been recognized, the funding allocated for convergence research in biomedical science is small and needs to be expanded. In fact, there is no federal agency with the responsibility to fund convergence in biomedical research.”

The National Institutes of Health (NIH) are the primary source of research funding for biomedical science in the United States. In 2015, only 3 percent of all principal investigators funded by NIH were from departments of engineering, bioengineering, physics, biophysics, or mathematics. Accordingly, the report’s authors call for increasing NIH funding for convergence research to at least 20 percent of the agency’s budget.

Progress and potential

In 2011, MIT released a white paper that outlined the concept of convergence. More than just interdisciplinary research, convergence entails the active integration of these diverse modes of inquiry into a unified pursuit of advances that will transform health and other sectors, from agriculture to energy.

The new report lays out a more comprehensive vision of what convergence-based research could achieve, as well as the concrete steps required to enable these advances.

“The 2011 report argued that convergence was the next revolution in health research, following molecular biology and genomics,” says Jacks. “That report helped identify the importance and growing centrality of convergence for health research. This report is different. It starts us off on a true strategy for convergence-based research in health.”

The report released today makes clear that, despite such obstacles, this “third revolution” is already well underway. Convergence-based research has become standard practice at MIT, most notably at the Koch Institute and the Institute for Medical Engineering and Science.

“About a third of all MIT engineers are involved in some aspect of convergence,” says Sharp. “These faculty are having an enormous impact on biomedical science and this will only grow in the future. Other universities are beginning to evolve along similar paths.”

Indeed, convergence-based approaches are becoming more common at many other pioneering university programs, including the Wyss Institute for Biologically Inspired Engineering at Harvard University, and the University of Chicago’s new Institute for Molecular Engineering, among others.

The report also points to several new federal initiatives that are harnessing the convergence research model to solve some of society’s most pressing health challenges.

For example, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, launched by the Obama administration in 2013, seeks to improve our understanding of how individual cells and neural circuits interact, in order to develop new ways to treat and prevent brain disorders. And the National Cancer Moonshot Initiative, launched earlier this year to accelerate research into cancer vaccines, early detection methods, and genomic tumor analysis, will also operate largely using convergence tools and approaches.

But the integration of new technologies and methods from genomics, information science, nanotechnology, and molecular biology could take us even further.

The report outlines three major disease areas — brain disorders, infectious diseases and immunology, and cancer — and promising convergence-based approaches to tackling them. It also presents case studies of four emerging technology categories: advanced imaging in the body, nanotechnology for drug and therapy delivery, regenerative engineering, and big data and health information technology.

A sampling gives a sense of their transformative potential. Convergence techniques could enable rewiring the genes of mosquitoes to eliminate Zika, dengue, and malaria. They could help solve the emerging threat of drug-resistant bacterial strains, which infect over two million people in the U.S. every year. Convergence-based immunotherapy could activate a person’s immune system to fight cancer, reprogramming a person’s T-cells or antibodies to find and attack tumor cells. Big-data techniques could be used to generate and analyze huge amounts of data on people’s exposures to industrial chemicals, environmental toxins, and infectious agents, creating a new field of “chemistry of nurture,” to complement the “chemistry of nature” developed by the documentation of the human genome.

“Convergence might come just in time,” says Hockfield, “given our rapidly aging population, increasing levels of chronic disease, and mounting healthcare costs due to demographic trends throughout the developed world. But we must overcome significant barriers to get to convergence.”

Cultivating convergence

Realizing the full potential of the convergence revolution will require much more ambitious and strategic coordination and collaboration across industry, government, and academia, the report argues.

The report accordingly calls for a concerted joint effort by federal agencies, universities, and industry to develop a new strategic roadmap to support convergence-based research. As a concrete next step, the report’s authors recommend establishing an interagency working group on convergence with participation from NIH, the National Science Foundation, and other federal agencies involved in funding scientific research, such as the Food and Drug Administration and the Department of Energy.

Other pressing challenges include grant review processes based on narrow, outdated disciplinary structures, which limit the availability of resources for cross-functional research teams. The report also proposes new practices to foster “cultures of convergence” within academic institutions: cross-department hiring and tenure review, convergence “cluster hiring” and career grants, and new PhD programs wherein students design their own degree programs across disciplinary boundaries.

If the potential of convergence is great, so are the stakes.

“Convergence has grown from a little seedling to a sprouting plant, but to become a great tree and orchard yielding fruit for decades into the future, it needs to be nourished, expanded, and cultivated now,” says Sharp. “Students need to be educated, collaborations need to be encouraged, and resources need to be committed to make sure convergence thrives.”

“This integration is important to deal with the great challenges of the future: continued growth in the accessibility and quality of healthcare, growth of the economy, and providing resources for future populations.”

Funding for the report was provided by the Raymond and Beverly Sackler Foundation, The Kavli Foundation, and the Burroughs Wellcome Fund.

University of California / University of Georgia: Integrated Trio of 2D Nanomaterials Unlocks Graphene Electronics Applications

Alexander Balandin (left) and Guanxiong Liu fabricated the voltage-controlled oscillator device in a cleanroom at UCR’s Center for Nanoscale Science and Engineering (CNSE). Credit: UC Riverside.

Graphene has emerged as one of the most promising two-dimensional crystals, but the future of electronics may include two other nanomaterials, according to a new study by researchers at the University of California, Riverside and the University of Georgia.

In research published Monday (July 4) in the journal Nature Nanotechnology, the researchers described the integration of three very different two-dimensional (2D) materials to yield a simple, compact, and fast voltage-controlled oscillator (VCO) device. A VCO is an electronic oscillator whose oscillation frequency is controlled by a voltage input.
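As a rough illustration (not from the paper), an idealized VCO can be modeled as a device whose output frequency varies linearly with its control voltage; the center frequency and gain below are made-up illustrative values, not measurements of the UCR device.

```python
def vco_frequency(v_control, f_center=1e6, gain_hz_per_volt=2e5):
    """Ideal (linearized) VCO: output frequency as a function of control voltage.

    f_center and gain_hz_per_volt are hypothetical values chosen only to
    illustrate the relationship, not parameters of the reported device.
    """
    return f_center + gain_hz_per_volt * v_control

print(vco_frequency(0.0))   # 1 MHz at the center voltage
print(vco_frequency(2.5))   # raising the control voltage raises the frequency
```

Real oscillators deviate from this linear model, but the sketch captures the defining behavior: the input voltage sets the output frequency.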

Titled “An Integrated Tantalum Sulfide—Boron Nitride—Graphene Oscillator: A Charge-Density-Wave Device Operating at Room Temperature,” the paper describes the development of the first useful device that exploits the potential of charge-density waves to modulate an electrical current through a 2D material. The new technology could become an ultralow power alternative to conventional silicon-based devices, which are used in thousands of applications from computers to clocks to radios. The thin, flexible nature of the device would make it ideal for use in wearable technologies.

Graphene, a single layer of carbon atoms that exhibits exceptional electrical and thermal conductivities, shows promise as a successor to silicon-based transistors. However, its application has been limited by its inability to function as a semiconductor, which is critical for the ‘on-off’ switching operations performed by electronic components.

To overcome this shortfall, the researchers turned to another 2D nanomaterial, tantalum sulfide (TaS2). They showed that voltage-induced changes in the atomic structure of the 1T polytype of TaS2 enable it to function as an electrical switch at room temperature—a requirement for practical applications.

“There are many charge-density wave materials that have interesting electrical switching properties. However, most of them reveal these properties at very low temperature only. The particular polytype of TaS2 that we used can have abrupt changes in resistance above room temperature. That made a crucial difference,” said Alexander Balandin, UC presidential chair professor of electrical and computer engineering in UCR’s Bourns College of Engineering, who led the research team.

To protect the TaS2 from environmental damage, the researchers coated it with another 2D material, hexagonal boron nitride, to prevent oxidation. By pairing the boron nitride-capped TaS2 with graphene, the team constructed a three-layer VCO that could pave the way for post-silicon electronics. In the proposed design, graphene functions as an integrated tunable load resistor, which enables precise voltage control of the current and VCO frequency. The prototype UCR devices operated at the MHz frequencies used in radios, and the extremely fast physical processes that define the device’s functionality would allow the operating frequency to be increased all the way to the THz range.

Balandin said the integrated system is the first example of a functional voltage-controlled oscillator device comprising 2D materials that operates at room temperature.

“It is difficult to compete with silicon, which has been used and improved for the past 50 years. However, we believe our device shows a unique integration of three very different 2D materials, which utilizes the intrinsic properties of each of these materials. The device can potentially become a low-power alternative to conventional silicon technologies in many different applications,” Balandin said.

The electronic function of graphene envisioned in the proposed 2D device overcomes the problem associated with the absence of an energy band gap, which has so far prevented graphene’s use as a transistor channel material. The extremely high thermal conductivity of graphene comes as an additional benefit in the device structure, by facilitating heat removal. The unique heat conduction properties of graphene were experimentally discovered and theoretically explained in 2008 by Balandin’s group at UCR. The Materials Research Society recognized this groundbreaking achievement by awarding Balandin the MRS Medal in 2013.

The Balandin group also demonstrated the first integrated graphene heat spreaders for high-power transistors and light-emitting diodes. “In those applications, graphene was used exclusively as heat conducting material. Its thermal conductivity was the main property. In the present work, we utilize both the electrical and thermal conductivity of graphene,” Balandin added.


More information: Guanxiong Liu et al, A charge-density-wave oscillator based on an integrated tantalum disulfide–boron nitride–graphene device operating at room temperature, Nature Nanotechnology (2016). DOI: 10.1038/NNANO.2016.108


Quantum Computing: What is it? How does it work? What will it Mean to Us?


Quantum computing is the area of study focused on developing computer technology based on the principles of quantum theory. The quantum computer, following the laws of quantum physics, would gain enormous processing power through the ability to be in multiple states, and to perform tasks using all possible permutations simultaneously.



A Comparison of Classical and Quantum Computing

Classical computing relies, at its most fundamental level, on principles expressed by Boolean algebra. Data must be processed in an exclusive binary state at any point in time: bits that are either 0 or 1. While the time each transistor or capacitor needs to remain in its 0 or 1 state before switching is now measurable in billionths of a second, there is still a limit on how quickly these devices can be made to switch states. As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold beyond which the classical laws of physics cease to apply. Beyond this, the quantum world takes over.

In a quantum computer, elementary particles such as electrons or photons can be used, with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit; the nature and behavior of these particles form the basis of quantum computing.

Quantum Superposition and Entanglement

The two most relevant aspects of quantum physics are the principles of superposition and entanglement.

  • Superposition: Think of a qubit as an electron in a magnetic field. The electron’s spin may be either in alignment with the field, which is known as a spin-up state, or opposite to the field, which is known as a spin-down state. According to quantum law, the particle enters a superposition of states, in which it behaves as if it were in both states simultaneously. Each qubit utilized could take a superposition of both 0 and 1.
  • Entanglement: Particles that have interacted at some point retain a type of connection and can become entangled in pairs, in a process known as correlation. Knowing the spin state of one entangled particle – up or down – tells you that the spin of its mate is in the opposite direction. These correlations appear instantaneously, no matter how great the distance between the particles, although entanglement cannot be used to send information faster than light. The particles will remain entangled as long as they are isolated.

Taken together, quantum superposition and entanglement create an enormously enhanced computing power. Where a 2-bit register in an ordinary computer can store only one of four binary configurations (00, 01, 10, or 11) at any given time, a 2-qubit register in a quantum computer can hold all four simultaneously, because each qubit exists in a superposition of two values. Each added qubit doubles this capacity, so it grows exponentially with the number of qubits.
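This exponential growth can be sketched with a minimal state-vector simulation (an illustration, not from the article): an n-qubit register is a complex vector of 2^n amplitudes, and putting every qubit into an equal superposition gives weight to all 2^n basis states at once.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate on one qubit
ket0 = np.array([1.0, 0.0])                    # single-qubit |0> state

def uniform_register(n_qubits):
    """n-qubit state vector with every qubit in the (|0>+|1>)/sqrt(2) superposition."""
    state = np.array([1.0])
    for _ in range(n_qubits):
        # The tensor (Kronecker) product builds the multi-qubit register.
        state = np.kron(state, H @ ket0)
    return state

two_qubits = uniform_register(2)
print(two_qubits)                 # four equal amplitudes: one per 00, 01, 10, 11
print(len(uniform_register(10)))  # 1024 amplitudes from just 10 qubits
```

Note that this is also why classical simulation of quantum computers breaks down quickly: the vector doubles in length with every qubit added.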

Difficulties with Quantum Computers

  • Interference – During the computation phase of a quantum calculation, the slightest disturbance in a quantum system (say, a stray photon or wave of EM radiation) causes the quantum computation to collapse, a process known as decoherence. A quantum computer must be totally isolated from all external interference during the computation phase.
  • Error correction – Given the nature of quantum computing, error correction is ultra critical – even a single error in a calculation can cause the validity of the entire computation to collapse.
  • Output observance – Closely related to the above two, retrieving output data after a quantum calculation is complete risks corrupting the data.

The Future of Quantum Computing

The biggest and most important application is the ability to factor a very large number into two primes. That matters because almost all encryption used by internet applications relies on the difficulty of that problem, so a quantum computer able to factor quickly could break it. Another is calculating the positions of individual atoms in very large molecules, such as polymers and viruses, and how those particles interact with each other: with a quantum computer you could use such simulations to develop drugs and understand how molecules work a bit better.
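A toy example (not from the article) shows why factoring underpins encryption: RSA-style schemes rest on the difficulty of recovering two primes p and q from their product N. Naive trial division works for small N, but its cost grows with the square root of N, which is why classical computers cannot factor the roughly 2048-bit numbers used on the internet, while Shor’s algorithm on a quantum computer could.

```python
def factor_semiprime(n):
    """Return (p, q) with p * q == n, by naive trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself was prime

print(factor_semiprime(15))         # a trivial semiprime: 3 * 5
print(factor_semiprime(101 * 103))  # still easy classically at this size
```

Doubling the number of digits in N roughly squares the work this loop must do, so the approach becomes hopeless long before cryptographic sizes.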

Even though there are many problems to overcome, the breakthroughs of the last 15 years, and especially the last three, have brought some form of practical quantum computing within reach. The potential this technology offers is attracting tremendous interest from both government and the private sector, and it is this potential that is rapidly breaking down the barriers to the technology; whether all of them can be broken, and when, is very much an open question.

Contributed by Ahmed Banafa


You Might Never Need Another Root Canal With This New Kind of Filling

There are lots of fun things you could be doing this weekend, barring unexpected misfortune—like needing an emergency root canal. It’s arguably the most dreaded dental procedure, but if a promising new type of filling pans out, no one need ever suffer through this often-painful process again.

Perhaps you are one of the fortunate few to whom the concept of a root canal is a mystery. Let me enlighten you. Dentists typically treat cavities by drilling away the decay and plugging the hole with a filling made of gold, porcelain, a composite resin (tooth-colored fillings), or an amalgam of some sort (usually an alloy of mercury, silver, copper, tin and sometimes zinc).

But those fillings can fail, and when that happens, all the soft tissue at the center of the tooth—nerves, blood vessels, that sort of thing—can get infected, and eventually die. And more often than not, the tooth does not go gently into that good night. It may start as a dull ache, but quickly morphs into what I once described as a “throbbing undercurrent of inflamed rage.”

When that happens, you need a root canal to save the tooth. Once the patient is anesthetized, the dentist opens up the tooth and scrapes out all the nerve tissue and pulp (blood vessels), then fills it with gutta-percha and caps off the whole shebang with cement. With all the soft tissue gone, the pain subsides, but now there’s no blood flow to the dead tooth. To maintain its structural integrity, most dentists follow up with a pricey porcelain crown, a.k.a. that part of the procedure not covered by your medical insurance.

In short, root canals—even the milder variety—are no fun, and it would be awesome if we never had to deal with them again. So three cheers for a team of scientists from Harvard and the University of Nottingham, who’ve come up with a new type of regenerative synthetic biomaterial for fillings. It actually stimulates the growth of stem cells in the pulp, which could repair damage from tooth decay. Damaged tooth, heal thyself! This would greatly reduce the number of fillings that eventually fail, and hence the number of root canals.

Small wonder the team placed second in the materials category of this year’s Emerging Technologies Competition, sponsored by the Royal Society of Chemistry.