Nanoparticles Stagger Delivery of Two Drugs to Knock Out Aggressive Cancer Tumors


 

Chemotherapy timing is key to success: Nanoparticles that stagger delivery of two drugs knock out aggressive tumors in mice.

Cambridge, MA | Posted on May 8th, 2014

 

Abstract:


MIT researchers have devised a novel cancer treatment that destroys tumor cells by first disarming their defenses, then hitting them with a lethal dose of DNA damage.

In studies with mice, the research team showed that this one-two punch, which relies on a nanoparticle that carries two drugs and releases them at different times, dramatically shrinks lung and breast tumors. The MIT team, led by Michael Yaffe, the David H. Koch Professor in Science, and Paula Hammond, the David H. Koch Professor in Engineering, describe the findings in the May 8 online edition of Science Signaling.

“I think it’s a harbinger of what nanomedicine can do for us in the future,” says Hammond, who is a member of MIT’s Koch Institute for Integrative Cancer Research. “We’re moving from the simplest model of the nanoparticle — just getting the drug in there and targeting it — to having smart nanoparticles that deliver drug combinations in the way that you need to really attack the tumor.”


Doctors routinely give cancer patients two or more different chemotherapy drugs in hopes that a multipronged attack will be more successful than a single drug. While many studies have identified drugs that work well together, a 2012 paper from Yaffe’s lab was the first to show that the timing of drug administration can dramatically influence the outcome.

In that study, Yaffe and former MIT postdoc Michael Lee found they could weaken cancer cells by administering the drug erlotinib, which shuts down one of the pathways that promote uncontrolled tumor growth. These pretreated tumor cells were much more susceptible to treatment with a DNA-damaging drug called doxorubicin than cells given the two drugs simultaneously.

“It’s like rewiring a circuit,” says Yaffe, who is also a member of the Koch Institute. “When you give the first drug, the wires’ connections get switched around so that the second drug works in a much more effective way.”

Erlotinib, which targets a protein called the epidermal growth factor (EGF) receptor, found on tumor cell surfaces, has been approved by the Food and Drug Administration to treat pancreatic cancer and some types of lung cancer. Doxorubicin is used to treat many cancers, including leukemia, lymphoma, and bladder, breast, lung, and ovarian tumors.

Staggering these drugs proved particularly powerful against a type of breast cancer cell known as triple-negative, which doesn’t have overactive estrogen, progesterone, or HER2 receptors. Triple-negative tumors, which account for about 16 percent of breast cancer cases, are much more aggressive than other types and tend to strike younger women.

That was an exciting finding, Yaffe says. “The problem was,” he adds, “how do you translate that into something you can actually give a cancer patient?”

From lab result to drug delivery

To approach this problem, Yaffe teamed up with Hammond, a chemical engineer who has previously designed several types of nanoparticles that can carry two drugs at once. For this project, Hammond and her graduate student, Stephen Morton, devised dozens of candidate particles. The most effective were a type of particle called liposomes — spherical droplets surrounded by a fatty outer shell.

The MIT team designed their liposomes to carry doxorubicin inside the particle’s core, with erlotinib embedded in the outer layer. The particles are coated with a polymer called PEG, which protects them from being broken down in the body or filtered out by the liver and kidneys. Another tag, folate, helps direct the particles to tumor cells, which express high quantities of folate receptors.

Once the particles reach a tumor and are taken up by cells, they begin to break down. Erlotinib, carried in the outer shell, is released first; doxorubicin, sequestered in the core, takes longer to seep into cells, giving erlotinib time to weaken the cells’ defenses. “There’s a lag of somewhere between four and 24 hours between when erlotinib peaks in its effectiveness and the doxorubicin peaks in its effectiveness,” Yaffe says.
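The staggered-release behavior can be illustrated with a toy first-order release model. The lag and release time constants below are hypothetical placeholders chosen to fall in the reported 4-24 hour window, not the kinetics measured in the paper:

```python
import math

def release_fraction(t_hours, lag, tau):
    """Fraction of drug released by time t: first-order release
    that begins only after a lag (illustrative model)."""
    if t_hours <= lag:
        return 0.0
    return 1.0 - math.exp(-(t_hours - lag) / tau)

# Hypothetical parameters: erlotinib releases quickly from the outer
# shell; doxorubicin escapes the core only after a delay.
erlotinib = [release_fraction(t, lag=0, tau=4) for t in range(49)]
doxorubicin = [release_fraction(t, lag=12, tau=8) for t in range(49)]
```

At six hours, most of the erlotinib is out while no doxorubicin has been released yet, which is exactly the ordering the one-two punch relies on.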

The researchers tested the particles in mice implanted with two types of human tumors: triple-negative breast tumors and non-small-cell lung tumors. Both types shrank significantly. Furthermore, packaging the two drugs in liposome nanoparticles made them much more effective than the traditional forms of the drugs, even when those drugs were given in a time-staggered order.

As a next step before possible clinical trials in human patients, the researchers are now testing the particles in mice that are genetically programmed to develop tumors on their own, instead of having human tumor cells implanted in them.

The researchers believe that time-staggered delivery could also improve other types of chemotherapy. They have devised several combinations involving cisplatin, a commonly used DNA-damaging drug, and are working on other combinations to treat prostate, head and neck, and ovarian cancers. At the same time, Hammond’s lab is working on more complex nanoparticles that would allow for more precise loading of the drugs and fine-tuning of their staggered release.

“With a nanoparticle delivery platform that allows us to control the relative rates of release and the relative amounts of loading, we can put these systems together in a smart way that allows them to be as effective as possible,” Hammond says.

Morton and Lee are the lead authors of the Science Signaling paper. Postdocs Zhou Deng, Erik Dreaden, and Kevin Shopsowitz, visiting student Elise Siouve, and graduate student Nisarg Shah also contributed to the research. The work was funded by the National Institutes of Health, the Center for Cancer Nanotechnology Excellence, and a Breast Cancer Alliance Exceptional Project Grant.

Written by Anne Trafton, MIT News Office

####

 

Copyright © Massachusetts Institute of Technology


Creating Selective-Sized “Nano” Holes in Graphene for Water Filtration


David L. Chandler, MIT News Office
New technique developed at MIT produces highly selective filter materials, could lead to more efficient desalination.
Researchers have devised a way of making tiny holes of controllable size in sheets of graphene, a development that could lead to ultrathin filters for improved desalination or water purification.

The team of researchers from MIT, Oak Ridge National Laboratory, and Saudi Arabia succeeded in creating subnanoscale pores in a sheet of the one-atom-thick material, which is one of the strongest materials known. Their findings are published in the journal Nano Letters.

The concept of using graphene, perforated by nanoscale pores, as a filter in desalination has been proposed and analyzed by other MIT researchers. The new work, led by graduate student Sean O’Hern and associate professor of mechanical engineering Rohit Karnik, is the first step toward actual production of such a graphene filter.

The MIT researchers used a four-step process to create filters from graphene (shown here): (a) a one-atom-thick sheet of graphene is placed on a supporting structure; (b) the graphene is bombarded with gallium ions; (c) wherever the gallium ions strike the graphene, they create defects in its structure; and (d) when etched with an oxidizing solution, each of those defects grows into a hole in the graphene sheet. The longer the material stays in the oxidizing bath, the larger the holes get. Image courtesy of the researchers.

Making these minuscule holes in graphene — a hexagonal array of carbon atoms, like atomic-scale chicken wire — occurs in a two-stage process. First, the graphene is bombarded with gallium ions, which disrupt the carbon bonds. Then, the graphene is etched with an oxidizing solution that reacts strongly with the disrupted bonds — producing a hole at each spot where the gallium ions struck. By controlling how long the graphene sheet is left in the oxidizing solution, the MIT researchers can control the average size of the pores.
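The size-by-etch-time control described above can be sketched with a toy linear-growth model. The growth rate, seed size, and molecular diameters below are illustrative order-of-magnitude values, not figures from the paper:

```python
def pore_diameter_nm(etch_minutes, rate_nm_per_min=0.02, seed_nm=0.3):
    """Toy model: defects seeded by gallium-ion bombardment grow
    roughly with time in the oxidizing bath (illustrative numbers)."""
    return seed_nm + rate_nm_per_min * etch_minutes

WATER_NM = 0.28        # approximate kinetic diameter of a water molecule
HYDRATED_NA_NM = 0.72  # approximate diameter of a hydrated sodium ion

def selective_for_desalination(d_nm):
    """Pore passes water but blocks hydrated salt ions."""
    return WATER_NM < d_nm < HYDRATED_NA_NM
```

In this sketch a short etch yields pores in the desalination-selective window, while a long etch overshoots it and lets salt through, mirroring the etching stages the researchers observed.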

A big limitation in existing nanofiltration and reverse-osmosis desalination plants, which use filters to separate salt from seawater, is their low permeability: Water flows very slowly through them. The graphene filters, being much thinner, yet very strong, can sustain a much higher flow. “We’ve developed the first membrane that consists of a high density of subnanometer-scale pores in an atomically thin, single sheet of graphene,” O’Hern says.

For efficient desalination, a membrane must demonstrate “a high rejection rate of salt, yet a high flow rate of water,” he adds. One way of doing that is decreasing the membrane’s thickness, but this quickly renders conventional polymer-based membranes too weak to sustain the water pressure, or too ineffective at rejecting salt, he explains.

With graphene membranes, it becomes simply a matter of controlling the size of the pores, making them “larger than water molecules, but smaller than everything else,” O’Hern says — whether salt, impurities, or particular kinds of biochemical molecules.

The permeability of such graphene filters, according to computer simulations, could be 50 times greater than that of conventional membranes, as demonstrated earlier by a team of MIT researchers led by graduate student David Cohen-Tanugi of the Department of Materials Science and Engineering. But producing such filters with controlled pore sizes has remained a challenge. The new work, O’Hern says, demonstrates a method for actually producing such material with dense concentrations of nanometer-scale holes over large areas.

“We bombard the graphene with gallium ions at high energy,” O’Hern says. “That creates defects in the graphene structure, and these defects are more chemically reactive.” When the material is bathed in a reactive oxidant solution, the oxidant “preferentially attacks the defects,” and etches away many holes of roughly similar size. O’Hern and his co-authors were able to produce a membrane with 5 trillion pores per square centimeter, well suited to use for filtration. “To better understand how small and dense these graphene pores are, if our graphene membrane were to be magnified about a million times, the pores would be less than 1 millimeter in size, spaced about 4 millimeters apart, and span over 38 square miles, an area roughly half the size of Boston,” O’Hern says.
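O’Hern’s magnification analogy can be checked with back-of-envelope arithmetic:

```python
pores_per_cm2 = 5e12   # reported pore density
mag = 1e6              # the "million times" magnification in the analogy

# Average pore spacing on a square grid, then magnified (cm -> mm is *10).
spacing_cm = (1.0 / pores_per_cm2) ** 0.5      # ~4.5e-7 cm, i.e. ~4.5 nm
spacing_mm_magnified = spacing_cm * mag * 10

# A 1 cm^2 membrane scaled by 1e6 in each dimension: 1e6 cm = 10 km per side.
side_km = 1e6 * 1e-5
area_sq_miles = side_km ** 2 / 2.59            # 1 sq mile ~ 2.59 km^2
```

This gives a spacing of about 4.5 mm and an area of roughly 38.6 square miles, consistent with the quoted "about 4 millimeters apart" and "38 square miles."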

With this technique, the researchers were able to control the filtration properties of a single, centimeter-sized sheet of graphene: Without etching, no salt flowed through the defects formed by gallium ions. With just a little etching, the membranes started allowing positive salt ions to flow through. With further etching, the membranes allowed both positive and negative salt ions to flow through, but blocked the flow of larger organic molecules. With even more etching, the pores were large enough to allow everything to go through.

Scaling up the process to produce useful sheets of the permeable graphene, while maintaining control over the pore sizes, will require further research, O’Hern says.

Karnik says that such membranes, depending on their pore size, could find various applications. Desalination and nanofiltration may be the most demanding, since the membranes required for these plants would be very large. But for other purposes, such as selective filtration of molecules — for example, removal of unreacted reagents from DNA — even the very small filters produced so far might be useful.

“For biofiltration, size or cost are not as critical,” Karnik says. “For those applications, the current scale is suitable.”

Bruce Hinds, a professor of materials engineering at the University of Kentucky who was not involved in this work, says, “Previous groups had tried just ion bombardment or plasma radical formation.” The idea of combining these methods “is nice and has the potential to be fine-tuned.” While more work needs to be done to refine the technique, he says, this approach is “promising” and could ultimately help to lead to applications in “water purification, energy storage, energy production, [and] pharmaceutical production.”

The work also included Jing Kong, the ITT Career Development Associate Professor of Electrical Engineering; MIT graduate students Michael Boutilier and Yi Song; researcher Juan-Carlos Idrobo of the Oak Ridge National Laboratory; and professors Tahar Laoui and Muataz Atieh of the King Fahd University of Petroleum and Minerals (KFUPM). The project received support from the Center for Clean Water and Clean Energy at MIT and KFUPM and the U.S. Department of Energy.

MIT researchers discover platform that manipulates organic molecules’ emission


Enhancing and manipulating the light emission of organic molecules is at the heart of many important technological and scientific advances, including organic light-emitting devices, bio-imaging, and bio-molecular detection. Researchers at MIT have now discovered a platform that enables dramatic manipulation of the emission of organic molecules when they are simply suspended on top of a carefully designed planar slab with a periodic array of holes: a so-called photonic crystal surface.

 

Influenced by the fast and directional emission channels (called ‘resonances’) provided by the photonic crystal surface, molecules in a solution suspended on top of the surface no longer behave in their usual fashion: instead of emitting light isotropically in all directions, they send light into specific directions.

The researchers say that this platform could also be applied to enhance other types of light-matter interactions, such as Raman scattering. Furthermore, the process applies to other nano-emitters as well, such as quantum dots.

Physics professors Marin Soljacic and John Joannopoulos, associate professor of applied mathematics Steven Johnson, research scientist Ofer Shapira, postdocs Alejandro Rodriguez and Xiangdong Liang, and graduate students Bo Zhen, Song-Liang Chua, and Jeongwon Lee report this discovery in Proceedings of the National Academy of Sciences.

“Most fluorescing molecules are like faint light bulbs uniformly emitting light into all directions,” says Soljacic. Researchers have often sought to enhance this emission by incorporating organic emitters into sub-wavelength structured cavities that are usually made out of inorganic materials. However, the challenge lies in an inherent incompatibility in the fabrication of cavities for such hybrid systems.

Zhen and colleagues present a simple and direct methodology for incorporating the organic emitters into their structures. By introducing a microfluidic channel on top of the photonic crystal surface, organic molecules in solution are delivered to the active region, where their interaction with light is enhanced. Each molecule then absorbs and emits significantly more energy, with an emission pattern that can be designed to be highly directional. “Now we can turn molecules from being simple light bulbs to powerful flashlights that are thousands of times stronger and can all be aligned towards the same direction,” says Shapira, the senior author of the paper.

This discovery lends itself to a number of practical applications. “During normal blood tests, for example,” adds Shapira, “cells and proteins are labeled with antibodies and fluorescing molecules that allow their recognition and detection. Their detection limit could be significantly improved using such a system due to the enhanced directional emission from the molecules.”

The researchers also demonstrated that the directional emission can be turned into organic lasing at low input powers. “This lasing demonstration truly highlights the novelty of this system,” says first author Zhen. For almost any lasing system there is a barrier on the input power level, called the lasing threshold, below which lasing will not happen; the lower the threshold, the less power it takes to turn the laser on. By exploiting the enhancement mechanisms of this platform, the researchers observed lasing at a substantially lower threshold: at least an order of magnitude lower than any previously reported result using the same molecules.

Source: Massachusetts Institute of Technology, Institute for Soldier Nanotechnologies

Self-steering particles go with the flow


Asymmetrical particles could make lab-on-a-chip diagnostic devices more efficient and portable.

Anne Trafton, MIT News Office

MIT chemical engineers have designed tiny particles that can “steer” themselves along preprogrammed trajectories and align themselves to flow through the center of a microchannel, making it possible to control the particles’ flow through microfluidic devices without applying any external forces.

 


A slightly asymmetrical particle flows along the center of a microfluidic channel

Such particles could make it more feasible to design lab-on-a-chip devices, which hold potential as portable diagnostic devices for cancer and other diseases. These devices consist of microfluidic channels engraved on tiny chips, but current versions usually require a great deal of extra instrumentation attached to the chip, limiting their portability.

Much of that extra instrumentation is needed to keep the particles flowing single file through the center of the channel, where they can be analyzed. This can be done by applying a magnetic or electric field, or by flowing two streams of liquid along the outer edges of the channel, forcing the particles to stay in the center.

The new MIT approach, described in Nature Communications, requires no external forces and takes advantage of hydrodynamic principles that can be exploited simply by altering the shapes of the particles.

Lead authors of the paper are Burak Eral, an MIT postdoc, and William Uspal, who recently received a PhD in physics from MIT. Patrick Doyle, the Singapore Research Professor of Chemical Engineering at MIT, is the senior author of the paper.

Exploiting asymmetry

The work builds on previous research showing that when a particle is confined in a narrow channel, it has strong hydrodynamic interactions with both the confining walls and any neighboring particles. These interactions, which originate from how particles perturb the surrounding fluid, are powerful enough that they can be used to control the particles’ trajectory as they flow through the channel.

The MIT researchers realized that they could manipulate these interactions by altering the particles’ symmetry. Each of their particles is shaped like a dumbbell, but with a different-size disc at each end.

When these asymmetrical particles flow through a narrow channel, the larger disc encounters more resistance, or drag, forcing the particle to rotate until the larger disc is lagging behind. The asymmetrical particles stay in this slanted orientation as they flow.

Because of this slanted orientation, the particles not only move forward, in the direction of the flow, they also drift toward one side of the channel. As a particle approaches the wall, the perturbation it creates in the fluid is reflected back by the wall, just as waves in a pool reflect from its wall. This reflection forces the particle to flip its orientation and move toward the center of the channel.

Slightly asymmetrical particles will overshoot the center and move toward the other wall, then come back toward the center again until they gradually achieve a straight path. Very asymmetrical particles will approach the center without crossing it, but very slowly. But with just the right amount of asymmetry, a particle will move directly to the centerline in the shortest possible time.
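The three regimes described above (overshoot and oscillation, slow one-sided approach, and fastest direct approach) mirror under-, over-, and critical damping. The sketch below is a toy damped-oscillator analogy in which "asymmetry" plays the role of damping; it is not the paper's hydrodynamic model, and all constants are illustrative:

```python
def settle_steps(asymmetry, steps=2000, dt=0.01):
    """Steps until the particle's lateral position settles at the
    centerline (y = 0), in a toy damped-oscillator analogy where the
    asymmetry parameter acts as the damping coefficient."""
    y, v = 1.0, 0.0   # start near a wall
    k = 1.0           # restoring effect of wall reflections
    for n in range(steps):
        a = -k * y - asymmetry * v
        v += a * dt
        y += v * dt
        if abs(y) < 1e-3 and abs(v) < 1e-3:
            return n
    return steps      # did not settle within the step budget

fast = settle_steps(2.0)    # near-critical damping: quickest settling
```

With too little "asymmetry" (e.g. 0.2) the trajectory oscillates across the centerline for a long time; with too much (e.g. 20) it creeps in slowly; near the critical value it settles fastest, matching the behavior the researchers describe.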

“Now that we understand how the asymmetry plays a role, we can tune it to what we want. If you want to focus particles in a given position, you can achieve that by a fundamental understanding of these hydrodynamic interactions,” Eral says.

“The paper convincingly shows that shape matters, and swarms can be redirected provided that shapes are well designed,” says Patrick Tabeling, a professor at the École Supérieure de Physique et de Chimie Industrielles in Paris, who was not part of the research team. “The new and quite sophisticated mechanism … may open new routes for manipulating particles and cells in an elegant manner.”

Diagnosis by particles

In 2006, Doyle’s lab developed a way to create huge batches of identical particles made of hydrogel, a spongy polymer. To create these particles, each thinner than a human hair, the researchers shine ultraviolet light through a mask onto a stream of flowing building blocks, or oligomers. Wherever the light strikes, solid polymeric particles are formed in the shape of the mask, in a process called photopolymerization.

During this process, the researchers can also load a fluorescent probe such as an antibody at one end of the dumbbell. The other end is stamped with a barcode — a pattern of dots that reveals the particle’s target molecule.

This type of particle can be useful for diagnosing cancer and other diseases once customized to detect proteins or DNA sequences in blood samples that can be signs of disease. Using a cytometer, scientists can read the fluorescent signal as the particles flow by in single file.

“Self-steering particles could lead to simplified flow scanners for point-of-care devices, and also provide a new toolkit from which one can develop other novel bioassays,” Doyle says.

The research was funded by the National Science Foundation, Novartis, and the Institute for Collaborative Biotechnologies through the U.S. Army Research Office.

Reducing Energy Costs with Better Batteries


A better battery—one that is cheap and safe, but packs a lot of power—could lead to an electric vehicle that performs better than today’s gasoline-powered cars and costs consumers about the same or less. Such a vehicle would reduce the United States’ reliance on foreign oil and lower energy costs for the average American, so one of the Department of Energy’s (DOE’s) goals is to fund research that will revolutionize the performance of next-generation batteries.

In honor of DOE’s supercomputing month, we are highlighting some of the ways researchers using supercomputers at the National Energy Research Scientific Computing Center (NERSC) are working to achieve this goal.

New Anode Boosts Capacity of Lithium-Ion Batteries

Lithium-ion batteries are everywhere—in smartphones, laptops, an array of other consumer electronics, and electric vehicles. Good as they are, they could be much better, especially when it comes to lowering the cost and extending the range of electric cars. To do that, batteries need to store a lot more energy.

Using supercomputers at NERSC, Berkeley Lab researchers developed a new kind of anode—the energy-storing component of a battery—capable of absorbing eight times the lithium of current designs. The secret is a tailored polymer that conducts electricity and binds closely to lithium-storing particles. The researchers achieved this result by running supercomputer calculations on a series of promising polymers until they found the right one. This research is an important step toward developing lithium-ion batteries with eight times their current capacity.

After more than a year of testing and many hundreds of charge-discharge cycles, Berkeley researchers found that their anode maintained its increased energy capacity. This is a significant improvement over many lithium-ion batteries on the market today, which degrade as they recharge. Best of all, the anodes are made from low-cost materials that are also compatible with standard lithium-battery manufacturing technologies.

Read More: https://www.nersc.gov/news-publications/news/science-news/2011/a-better-lithium-ion-battery-on-the-way/

Engineering Better Energy Storage

One of the biggest weaknesses of today’s electric vehicles is battery life—most cars can go only about 100-200 miles between charges. But researchers hope that a new type of battery, called the lithium-air battery, will one day lead to cost-effective, long-range electric vehicles that could travel 300 miles or more between charges.

Using supercomputers at NERSC and powerful microscopes, a team of researchers from the Pacific Northwest National Laboratory (PNNL) and Princeton University built a novel graphene membrane that could produce a lithium-air battery with the highest-energy capacity to date. Because the material does not rely on platinum or other precious metals, its potential cost and environmental impact are significantly less than current technology.

Read More: https://www.nersc.gov/news-publications/news/science-news/2012/bubbles-help-break-energy-storage-record-for-lithium-air-batteries/

Promise for Onion-Like Carbons as Supercapacitors

The two most important electrical storage technologies on the market today are batteries and capacitors—both have their pluses and minuses. Batteries can store a lot of energy but have slow charge and discharge rates. Capacitors generally store less energy but have very fast (nearly instant) charge and discharge rates, and they last longer than rechargeable batteries. Developing technologies that combine the optimal characteristics of both will require a detailed understanding of how these devices work at the molecular level. That’s where supercomputers come in handy.
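The energy side of that trade-off is easy to quantify with the capacitor energy formula E = ½CV². The cell figures below are typical order-of-magnitude values for commercial parts, not numbers from the article:

```python
def capacitor_energy_wh(capacitance_f, voltage_v):
    """Energy stored in a capacitor, E = 1/2 * C * V^2, in watt-hours."""
    return 0.5 * capacitance_f * voltage_v ** 2 / 3600.0

# Illustrative comparison: a large 3000 F, 2.7 V supercapacitor versus
# a single 3.6 V, 2.5 Ah lithium-ion cell.
supercap_wh = capacitor_energy_wh(3000, 2.7)  # ~3 Wh
liion_wh = 3.6 * 2.5                          # 9 Wh
```

Even a physically large supercapacitor stores several times less energy than one small lithium-ion cell, which is the gap the molecular-level modeling aims to close.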

One promising electrical storage device is the supercapacitor, which combines the fast charge and discharge rates of conventional capacitors with the high power density, high capacitance (ability to store electrical charge), and durability of a battery. Today supercapacitors power electric vehicles, portable electronic equipment, and various other devices. Despite their use in the marketplace, researchers believe these energy storage devices could perform much better. One area they hope to improve is the device’s electrode—the conductor through which electricity enters or leaves.

Most supercapacitor electrodes are made of carbon-based materials, but one promising material yet to be explored is graphene. The strongest material known, graphene also has unique electrical, thermal, mechanical, and chemical properties. Using supercomputers at NERSC, scientists ran simulations to understand how the shape of a graphene electrode affects its electrical properties. They hope that one day this work will inspire the design of supercapacitors that can hold a much more stable electric charge.

Read More: http://www.nersc.gov/news-publications/news/science-news/2012/why-onion-like-carbons-make-high-energy-supercapacitors/

A Systematic Approach to Battery Design

New materials are crucial for building advanced batteries, but today the development cycle is too slow. It takes about 15 to 18 years to go from conception to commercialization. To speed up this process, a team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and the Massachusetts Institute of Technology (MIT) created a new computational tool called the Materials Project, which is hosted at NERSC.

The Materials Project uses supercomputers at NERSC, Berkeley Lab and the University of Kentucky to characterize the properties of inorganic compounds—such as stability, voltage, capacity and oxidation state—via computer simulations. The results are then organized into a database with a user-friendly web interface that allows users to easily access and search for the compound that they would like to use in their new material design. Knowing the properties of a compound beforehand allows researchers to quickly assess whether their idea will be successful, without spending money and time developing prototypes and experiments that will eventually lead to a dead-end.
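The screening workflow the Materials Project enables can be sketched with a toy in-memory database. The compounds, fields, and property values here are illustrative stand-ins, not real Materials Project data or its API:

```python
# Hypothetical records mimicking a computed-properties database.
compounds = [
    {"formula": "LiFePO4", "voltage_v": 3.4, "capacity_mah_g": 170, "stable": True},
    {"formula": "LiCoO2",  "voltage_v": 3.9, "capacity_mah_g": 140, "stable": True},
    {"formula": "LiMnO2",  "voltage_v": 3.0, "capacity_mah_g": 200, "stable": False},
]

def screen(db, min_voltage, min_capacity):
    """Return stable candidates meeting voltage and capacity targets,
    the kind of pre-prototype filtering the article describes."""
    return [c["formula"] for c in db
            if c["stable"]
            and c["voltage_v"] >= min_voltage
            and c["capacity_mah_g"] >= min_capacity]
```

A query like `screen(compounds, 3.2, 150)` narrows the field before any prototype is built, which is the money- and time-saving step the article highlights.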

In early 2013, DOE pledged $120 million over five years to establish the Joint Center for Energy Storage Research (JCESR). As part of this initiative, the Berkeley Lab and MIT researchers will run simulations at NERSC to predict the properties of electrolytes, the liquid component of a battery. The results will be incorporated into a database similar to the Materials Project. Eventually researchers will be able to combine the JCESR database with the Materials Project to get a complete picture of battery components. Together, these resources allow scientists to take a systematic, predictive approach to battery design.

Read More: https://www.nersc.gov/news-publications/news/science-news/2012/nersc-helps-develop-next-gen-batteries/

For more information about how Berkeley Lab is celebrating DOE supercomputing month, please visit: http://cs.lbl.gov/news-media/news/2013/supercomputing-sept-2013/


About Berkeley Lab Computing Sciences

The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy’s research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe. ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 5,500 scientists at national laboratories and universities, including those at Berkeley Lab’s Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation.

Chlorine and Graphene Combine


Researchers at the Massachusetts Institute of Technology in the US are reporting on a new way to p-dope graphene that does not sacrifice its excellent electronic properties too much – something that has proved to be somewhat of a challenge until now. The resulting material could be ideal for making all-graphene integrated circuits on a chip, radio-frequency transistors and nanoelectronic circuit interconnects, to name a few examples.

In the lab

Graphene is a flat sheet of carbon just one atom thick – with the carbon atoms arranged in a honeycombed lattice. Since the material was first isolated in 2004, its unique electronic and mechanical properties, which include extremely high mobility and high strength, have amazed researchers, who say that it could be used in a host of device applications. Indeed, according to some, graphene might even replace silicon as the electronic material of choice in the future. This is because electrons whiz through graphene at extremely high speeds, behaving like “Dirac” particles with no rest mass, a property that could allow for transistors faster than any existing today.

However, unlike the semiconductor silicon, graphene has no gap between its valence and conduction bands. Such a bandgap is essential for electronics applications because it allows a material to switch the flow of electrons on and off. One way of introducing a bandgap into graphene is to chemically dope it, but this has to be done carefully so as not to destroy graphene’s unique electronic properties too much.

Plasma-based surface functionalization technique

A team led by Mildred Dresselhaus and Tomas Palacios has now succeeded in p-doping graphene with chlorine using a plasma-based surface functionalization technique. “Compared with other chemical doping methods, the advantages of our approach are very significant,” says team member Xu Zhang. “First and foremost, the chlorine-doped graphene keeps a high charge mobility of around 1500 cm²/V·s after the hole doping. This value is impressively high compared to those obtained with other chemical species previously.”

The chlorine can also cover over 45% of the graphene sample surface, he adds – the highest surface coverage reported to date for any graphene dopant, according to the researchers.

Density functional theory predicts that a bandgap of up to 1.2 eV can be opened up in graphene if both sides of the sample are chlorinated, and if the amount of chlorine on each side covers 50% of the total sample area. “The 45.3% coverage in single-sided chlorinated graphene observed in our work is thus important and paves the way to ultimately opening up a sizeable bandgap in the material while maintaining a reasonably high mobility,” Zhang told nanotechweb.org.
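To put the reported mobility in context, a back-of-the-envelope Drude-model estimate of sheet resistance, R_s = 1/(q·n_s·μ), can be sketched. The hole sheet density used below is a hypothetical illustrative value, not a number taken from the paper:

```python
# Drude-model estimate of the sheet resistance of doped graphene.
# R_s = 1 / (q * n_s * mu). The hole sheet density n_s is an assumed,
# illustrative value; only the mobility comes from the article.

Q = 1.602e-19          # elementary charge, C
MU = 1500.0            # reported hole mobility, cm^2/(V*s)
N_S = 1.0e13           # assumed hole sheet density, cm^-2 (hypothetical)

def sheet_resistance(n_s_cm2, mu_cm2):
    """Sheet resistance in ohms per square for a 2D conductor."""
    return 1.0 / (Q * n_s_cm2 * mu_cm2)

R_s = sheet_resistance(N_S, MU)
print(f"Estimated sheet resistance: {R_s:.0f} ohm/sq")
```

With these assumed numbers the sheet resistance comes out around a few hundred ohms per square, which is why the authors highlight the reduced sheet resistance as useful for interconnects.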

In their work, the researchers studied both “exfoliated” graphene and that obtained using chemical vapour deposition (CVD). They performed the chlorine plasma treatments in an Electron Cyclotron Resonance Reactive Ion Etcher (ECR/RIE) in which chlorine gas was excited into the plasma state by absorbing energy from an in-phase electromagnetic field at a certain frequency. The chlorine plasma was accelerated by applying a DC bias relative to the sample stage. “We carefully optimized both the ECR power and DC bias to control the reaction conditions,” explained Zhang, “and the experiments were performed at room temperature.”

The p-doped material produced could be used to make all-graphene integrated circuits on a chip and RF transistors, he added. Doping the graphene with chlorine also reduces its sheet resistance, making it suitable for use in electronic circuit interconnects.

The team now plans to dope suspended samples of graphene with chlorine – to access both sides of a sample – and so open up an even bigger electronic band gap.

The present work is detailed in ACS Nano DOI: 10.1021/nn4026756.

Rechargeable Flow Batteries: Solution to Cheap Renewable Energy Storage


Researchers at MIT have developed a battery that could bring us reliable and cheap large-scale energy storage. Starting from flow-battery technology, the researchers removed the costly membrane and created a battery whose power density is an order of magnitude higher than that of lithium-ion batteries and three times greater than that of other membrane-less systems.

 


MIT reports, “The device stores and releases energy in a device that relies on a phenomenon called laminar flow: Two liquids are pumped through a channel, undergoing electrochemical reactions between two electrodes to store or release energy. Under the right conditions, the solutions stream through in parallel, with very little mixing. The flow naturally separates the liquids, without requiring a costly membrane.”
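The membrane-less design only works if the two streams stay laminar. As a rough sanity check, the Reynolds number can be estimated for a water-like electrolyte in a millimeter-scale channel; all of the parameters below are illustrative assumptions, not measurements from the MIT prototype:

```python
# Order-of-magnitude check that flow in a millimeter-scale channel is
# laminar (Re well below the ~2000 transition threshold for channel flow).
# Every number here is an assumed, illustrative value.

def reynolds(density, velocity, length_scale, viscosity):
    """Reynolds number Re = rho * v * L / mu (dimensionless)."""
    return density * velocity * length_scale / viscosity

RHO = 1000.0      # kg/m^3, water-like electrolyte (assumed)
V = 0.1           # m/s, flow speed (assumed)
H = 1e-3          # m, channel height (assumed)
MU = 1e-3         # Pa*s, viscosity (assumed)

Re = reynolds(RHO, V, H, MU)
print(f"Re = {Re:.0f}")   # far below ~2000, so the two streams stay laminar
```

At this scale the Reynolds number is around 100, so the two liquids slide past each other in parallel with little mixing, which is what lets the design dispense with a membrane.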

The reactants are liquid bromine and hydrogen fuel – a cheap combination, but one that has broken down the membranes in other flow batteries. By taking out the membrane, the researchers were able to speed up energy storage and extend the life of the battery.

“Here, we have a system where performance is just as good as previous systems, and now we don’t have to worry about issues of the membrane,” says Martin Bazant, a professor of chemical engineering. “This is something that can be a quantum leap in energy-storage technology.”

As we bring more renewable technologies like wind and solar into the grid, affordable and reliable energy storage is increasingly important. While solar and wind energy output varies based on weather conditions, large scale energy storage systems can smooth out the power delivery from those technologies by storing any excess energy when it’s produced and using it when the output is lower or demand is higher.

“Energy storage is the key enabling technology for renewables,” says Cullen Buie, an assistant professor of mechanical engineering. “Until you can make [energy storage] reliable and affordable, it doesn’t matter how cheap and efficient you can make wind and solar, because our grid can’t handle the intermittency of those renewable technologies.”

MIT says, “Braff built a prototype of a flow battery with a small channel between two electrodes. Through the channel, the group pumped liquid bromine over a graphite cathode and hydrobromic acid under a porous anode. At the same time, the researchers flowed hydrogen gas across the anode. The resulting reactions between hydrogen and bromine produced energy in the form of free electrons that can be discharged or released.

The researchers were also able to reverse the chemical reaction within the channel to capture electrons and store energy — a first for any membraneless design.”

Now that the team’s experiments have lined up with their computer models, they’re focused on scaling up the technology and seeing how it performs. They predict that it will be able to deliver stored energy for as little as $100/kWh, which would make it the cheapest large-scale energy storage system yet built.

Nanosensors Could Aid Drug Manufacturing


CAMBRIDGE, Mass. (MIT News Office) – Chemical engineers find that arrays of carbon nanotubes can detect flaws in drugs and help improve production. MIT chemical engineers have discovered that arrays of billions of nanoscale sensors have unique properties that could help pharmaceutical companies produce drugs – especially those based on antibodies – more safely and efficiently.
Using these sensors, the researchers were able to characterize variations in the binding strength of antibody drugs, which hold promise for treating cancer and other diseases. They also used the sensors to monitor the structure of antibody molecules, including whether they contain a chain of sugars that interferes with proper function.
“This could help pharmaceutical companies figure out why certain drug formulations work better than others, and may help improve their effectiveness,” says Michael Strano, an MIT professor of chemical engineering and senior author of a recent paper describing the sensors in the journal ACS Nano.
The team also demonstrated how nanosensor arrays could be used to determine which cells in a population of genetically engineered, drug-producing cells are the most productive or desirable, Strano says. Lead author of the paper is Nigel Reuel, a graduate student in Strano’s lab. The labs of MIT faculty members Krystyn Van Vliet, Christopher Love and Dane Wittrup also contributed, along with scientists from Novartis.
Testing drug strength
Strano and other scientists have previously shown that tiny, nanometer-sized sensors, such as carbon nanotubes, offer a powerful way to detect minute quantities of a substance. Carbon nanotubes are 50,000 times thinner than a human hair, and they can bind to proteins that recognize a specific target molecule. When the target is present, it alters the fluorescent signal produced by the nanotube in a way that scientists can detect.
Some researchers are trying to exploit large arrays of nanosensors, such as carbon nanotubes or semiconducting nanowires, each customized for a different target molecule, to detect many different targets at once. In the new study, Strano and his colleagues wanted to explore unique properties that emerge from large arrays of sensors that all detect the same thing.
The first feature they discovered, through mathematical modeling and experimentation, is that uniform arrays can measure the distribution in binding strength of complex proteins such as antibodies. Antibodies are naturally occurring molecules that play a key role in the body’s ability to recognize and defend against foreign invaders. In recent years, scientists have been developing antibodies to treat disease, particularly cancer. When those antibodies bind to proteins found on cancer cells, they stimulate the body’s own immune system to attack the tumor.
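The array idea described above can be sketched with a toy simulation: if each nanosensor reports the equilibrium occupancy of a single antibody (a simple Langmuir binding model), the spread of readouts across the array mirrors the spread of binding strengths in the batch. All distributions and concentrations below are invented for illustration and are not from the study:

```python
# Toy model of how a large, uniform sensor array reports a *distribution*
# of binding strengths rather than a single average. Each sensor reads
# out the equilibrium occupancy of one antibody with its own Kd; the
# spread of readouts across the array mirrors the Kd spread of the batch.
# All Kd distributions and concentrations are invented for illustration.

import math
import random
import statistics

random.seed(0)

def occupancy(kd_nM, ligand_nM=10.0):
    """Langmuir equilibrium occupancy for a 1:1 binding event."""
    return ligand_nM / (kd_nM + ligand_nM)

# Dissociation constants (nM) for two hypothetical antibody batches with
# the same median binding strength but different batch heterogeneity.
tight = [random.lognormvariate(math.log(10), 0.2) for _ in range(10_000)]
broad = [random.lognormvariate(math.log(10), 1.0) for _ in range(10_000)]

tight_reads = [occupancy(kd) for kd in tight]
broad_reads = [occupancy(kd) for kd in broad]

# A single bulk measurement would see the same average for both batches;
# the per-sensor spread is what distinguishes them.
print(f"tight batch readout spread: {statistics.stdev(tight_reads):.3f}")
print(f"broad batch readout spread: {statistics.stdev(broad_reads):.3f}")
```

The point of the sketch is that both batches look identical to a bulk assay, while the array's per-sensor statistics expose the inconsistent batch.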
For antibody drugs to be effective, they must strongly bind their target. However, the manufacturing process, which relies on nonhuman, engineered cells, does not always generate consistent, uniformly binding batches of antibodies.

Currently, drug companies use time-consuming and expensive analytical processes to test each batch and make sure it meets the regulatory standards for effectiveness. However, the new MIT sensor could make this process much faster, allowing researchers to not only better monitor and control production, but also to fine-tune the manufacturing process to generate a more consistent product.
“You could use the technology to reject batches, but ideally you’d want to use it in your upstream process development to better define culture conditions, so then you wouldn’t produce spurious lots,” Reuel says.
Measuring weak interactions
Another useful trait of such sensors is their ability to measure very weak binding interactions, which could also help with antibody drug manufacturing.
Antibodies are usually coated with long sugar chains through a process called glycosylation. These sugar chains are necessary for the drugs to be effective, but they are extremely hard to detect because they interact so weakly with other molecules. Drug-manufacturing organisms that synthesize antibodies are also programmed to add sugar chains, but the process is difficult to control and is strongly influenced by the cells’ environmental conditions, including temperature and acidity.
Without the appropriate glycosylation, antibodies delivered to a patient may elicit an unwanted immune response or be destroyed by the body’s cells, making them useless.
“This has been a problem for pharmaceutical companies and researchers alike, trying to measure glycosylated proteins by recognizing the carbohydrate chain,” Strano says. “What a nanosensor array can do is greatly expand the number of opportunities to detect rare binding events. You can measure what you would otherwise not be able to quantify with a single, larger sensor with the same sensitivity.” This tool could help researchers determine the optimal conditions for the correct degree of glycosylation to occur, making it easier to consistently produce effective drugs.
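Strano's point about rare binding events can be illustrated with a toy comparison (every parameter below is an assumption, not a figure from the paper): a single event flips one nanotube's fluorescence completely, a countable step, whereas the same event shifts one large averaged sensor's signal by only 1/N, which is easily lost in noise.

```python
# Sketch of why many small sensors beat one large sensor for rare events.
# A binding event flips one nanotube entirely (a discrete, countable
# step), but shifts a single large sensor's averaged signal by only 1/N.
# All parameters are illustrative assumptions.

import random

random.seed(1)

N = 1_000_000          # sensors in the array (assumed)
events = 25            # rare binding events during the measurement (assumed)
noise = 0.01           # fractional noise floor of a bulk reading (assumed)

# Array readout: each event registers as a discrete flipped sensor.
flipped = {random.randrange(N) for _ in range(events)}
array_count = len(flipped)

# Bulk readout: the same events shift the averaged signal by events/N.
bulk_shift = events / N

print(f"array counts {array_count} discrete events")
print(f"bulk signal shifts by {bulk_shift:.1e} "
      f"({'below' if bulk_shift < noise else 'above'} the noise floor)")
```

Under these assumptions the bulk signal shift (2.5e-5) sits far below the assumed noise floor, while the array still resolves each event individually.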
Mapping production
The third property the researchers discovered is the ability to map the production of a molecule of interest. “One of the things you would like to do is find strains of particular organisms that produce the therapeutic that you want,” Strano says. “There are lots of ways of doing this, but none of them are easy.”
The MIT team found that by growing the cells on a surface coated with an array of nanometer-sized sensors, they could detect the location of the most productive cells. In this study, they looked for an antibody produced by engineered human embryonic kidney cells, but the system could also be tailored to other proteins and organisms.
Once the most productive cells are identified, scientists look for genes that distinguish those cells from the less productive ones and engineer a new strain that is highly productive, Strano says.
The researchers have built a briefcase-sized prototype of their sensor that they plan to test with Novartis, which funded the research along with the National Science Foundation.
“Carbon nanotubes coupled to protein-binding entities are interesting for several areas of bio-manufacturing as they offer great potential for online monitoring of product levels and quality. Our collaboration has shown that carbon nanotube-based fluorescent sensors are applicable for such purposes, and I am eager to follow the maturation of this technology,” says Ramon Wahl, an author of the paper and a principal scientist at Novartis.

“Programmable Matter” using Nanocrystals


When University of Pennsylvania nano-scientists created beautiful, tiled patterns with flat nano-crystals, they were left with a mystery: why did some sets of crystals arrange themselves in an alternating, herringbone style, even though it wasn’t the simplest pattern? To find out, they turned to experts in computer simulation at the University of Michigan and the Massachusetts Institute of Technology.


These transmission electron microscope images show the two different patterns the nano-crystals could be made to pack in. 

The result gives nanotechnology researchers a new tool for controlling how objects one-millionth the size of a grain of sand arrange themselves into useful materials; it also gives them a means to discover the rules for “programming” those objects into desired configurations.

The study was led by Christopher Murray, a professor with appointments in the Department of Chemistry in the School of Arts and Sciences and the Department of Materials Science and Engineering in the School of Engineering and Applied Sciences. Also on the Penn team were Cherie Kagan, a chemistry, MSE and electrical and systems engineering professor, and postdoctoral researchers Xingchen Ye, Jun Chen and Guozhong Xing. 

They collaborated with Sharon Glotzer, a professor of chemical engineering at Michigan, and Ju Li, a professor of nuclear science and engineering at MIT.

Their research was featured on the cover of the journal Nature Chemistry.

“The excitement in this is not in the herringbone pattern,” Murray said. “It’s about the coupling of experiment and modeling, and how that approach lets us take on a very hard problem.”

Previous work in Murray’s group has focused on creating nanocrystals and arranging them into larger crystal superstructures. Ultimately, researchers want to modify patches on the nanocrystals in different ways to coax them into more complex patterns. The goal is developing “programmable matter” – a method for designing particles based on the properties needed for a particular job.

“By engineering interactions at the nanoscale,” Glotzer said, “we can begin to assemble target structures of great complexity and functionality on the macroscale.”

Glotzer introduced the concept of nanoparticle “patchiness” in 2004. Her group uses computer simulations to understand and design the patches.

Recently, Murray’s team made patterns with flat nanocrystals made of heavy metals, known to chemists as lanthanides, and fluorine atoms. Lanthanides have valuable properties for solar energy and medical imaging, such as the ability to convert between high- and low-energy light.

They started by breaking down chemicals containing atoms of a lanthanide metal and fluorine in a solution, and the lanthanide and fluorine naturally began to form crystals. Also in the mix were chains of carbon and hydrogen that stuck to the sides of the crystals, stopping their growth at sizes around 100 nanometers, or 100 millionths of a millimeter, at the largest dimensions. By using lanthanides with different atomic radii, they could control the top and bottom faces of the hexagonal crystals to be anywhere from much longer than the other four sides to non-existent, resulting in a diamond shape.

To form tiled patterns, the team purified the nano-crystals and mixed them with a solvent. They spread this mixture in a thin layer over a thick fluid, which supported the crystals while allowing them to move. As the solvent evaporated, the crystals had less space available, and they began to pack together.

The diamond shapes and the very long hexagons lined up as expected, the diamonds forming an argyle-style grid and the hexagons matching up their longest edges like a foreshortened honeycomb. The hexagons whose sides were all nearly the same length should have formed a similar squashed honeycomb pattern, but, instead, they lined up in an alternating herringbone style.

“Whenever we see something that isn’t taking the simplest pattern possible, we have to ask why,” Murray said.

They posed the question to Glotzer’s team.

“They’ve been world leaders in understanding how these shapes could work on nanometer scales, and there aren’t many groups that can make the crystals we make,” Murray said. “It seemed natural to bring these strengths together.”

Glotzer and her group built a computer model that could recreate the self-assembly of the same range of shapes that Murray had produced. The simulations showed that if the equilateral hexagons interacted with one another only through their shapes, most of the crystals formed the foreshortened honeycomb pattern, not the herringbone.

“That’s when we said, ‘Okay, there must be something else going on. It’s not just a packing problem,'” Glotzer said. Her team, which included graduate student Andres Millan and research scientist Michael Engel, then began playing with interactions between the edges of the particles. They found that if the edges that formed the points were stickier than the other two sides, the hexagons would naturally arrange in the herringbone pattern.
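The effect can be caricatured with a simple counting model. Under the deliberately simplified assumption that the contact energy of two touching edges is minus the product of their stickiness values (all numbers invented), pairing the sticky point edges with each other, as in the herringbone motif, beats an arrangement that mixes sticky and plain edges:

```python
# Toy energy count illustrating the finding: if the four "point" edges
# of each hexagon are stickier than the other two, pairing like edges
# with like (the herringbone motif) lowers the energy. Contact energy is
# modeled as -(s_i * s_j), the product of the two touching edges'
# stickiness -- a deliberately simplified, hypothetical interaction.

STICKY = 1.0    # relative stickiness of the four point edges (assumed)
PLAIN = 0.5     # relative stickiness of the other two edges (assumed)

def tiling_energy(contacts):
    """Total energy of a list of (edge_a, edge_b) stickiness pairs."""
    return -sum(a * b for a, b in contacts)

# Six edge contacts around one hexagon in each motif (schematic).
# Both lists use the same pool of edge ends; only the pairing differs.
herringbone = [(STICKY, STICKY)] * 4 + [(PLAIN, PLAIN)] * 2
mixed       = [(STICKY, STICKY)] * 3 + [(PLAIN, PLAIN)] * 1 + [(STICKY, PLAIN)] * 2

print(f"herringbone: {tiling_energy(herringbone):.2f}")
print(f"mixed pairing: {tiling_energy(mixed):.2f}")
```

This is just the rearrangement inequality in disguise: with a product-form attraction, like-with-like pairing always minimizes the total energy, which is the qualitative behavior the simulations showed.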

The teams suspected that the source of the stickiness was those carbon and hydrogen chains. Perhaps they attached to the point edges more easily, the team members thought. Since experiment doesn’t yet offer a way to measure the number of hydrocarbon chains on the sides of such tiny particles, Murray asked MIT’s Ju Li to calculate how the chains would attach to the edges at a quantum mechanical level.

Li’s group confirmed that, because of the way that the different facets cut across the lattice of the metal and fluorine atoms, more hydrocarbon chains could stick to the four edges that led to points than the remaining two sides. As a result, the particles become patchy.

“Our study shows a way forward: making very subtle changes in building-block architecture and getting a very profound change in the larger self-assembled pattern,” Glotzer said. “The goal is to have knobs that you can change just a little and get a big change in structure, and this is one of the first papers that shows a way forward for how to do that.”

Nanotechnology explained: Nanowires and nanotubes


(Nanowerk News) Nanowires and nanotubes, slender structures that are only a few billionths of a meter in diameter but many thousands or millions of times longer, have become hot materials in recent years. They exist in many forms – made of metals, semiconductors, insulators and organic compounds – and are being studied for use in electronics, energy conversion, optics and chemical sensing, among other fields.

This Scanning Electron Microscope image shows an array of nanowires. (Photo: Kristian Molhave/Opensource Handbook of Nanoscience and Nanotechnology)

The initial discovery of carbon nanotubes – tiny tubes of pure carbon, essentially sheets of graphene rolled up into a cylinder – is generally credited to a paper published in 1991 by the Japanese physicist Sumio Iijima (although some forms of carbon nanotubes had been observed earlier). Almost immediately, there was an explosion of interest in this exotic form of a commonplace material. Nanowires – solid crystalline fibers, rather than hollow tubes – gained similar prominence a few years later.
Due to their extreme slenderness, both nanotubes and nanowires are essentially one-dimensional. “They are quasi-one-dimensional materials,” says MIT associate professor of materials science and engineering Silvija Gradecak: “Two of their dimensions are on the nanometer scale.” This one-dimensionality confers distinctive electrical and optical properties.
For one thing, it means that the electrons and photons within these nanowires experience “quantum confinement effects,” Gradecak says. And yet, unlike other materials that produce such quantum effects, such as quantum dots, nanowires’ length makes it possible for them to connect with other macroscopic devices and the outside world.
The structure of a nanowire is so simple that there’s no room for defects, and electrons pass through unimpeded, Gradecak explains. This sidesteps a major problem with typical crystalline semiconductors, such as those made from a wafer of silicon: There are always defects in those structures, and those defects interfere with the passage of electrons.
Made of a variety of materials, nanowires can be “grown” on many different substrates through a vapor deposition process. Tiny beads of molten gold or other metals are deposited on a surface; the nanowire material, in vapor, is then absorbed by the molten gold, ultimately growing from the bottom of that bead as a skinny column of the material. By selecting the size of the metal bead, it is possible to precisely control the size of the resulting nanowire.
In addition, materials that don’t ordinarily mix easily can be grown together in nanowire form. For example, layers of silicon and germanium, two widely used semiconductors, “are very difficult to grow together in thin films,” Gradecak says. “But in nanowires, they can be grown without any problems.” Moreover, the equipment needed for this kind of vapor deposition is widely used in the semiconductor industry, and can easily be adapted for the production of nanowires.
While nanowires’ and nanotubes’ diameters are negligible, their length can extend for hundreds of micrometers, even reaching lengths visible to the unaided eye. No other known material can produce such extreme length-to-diameter ratios: millions of times longer than they are wide.
Because of this, the wires have an extremely high ratio of surface area to volume. That makes them very good as detectors, because all that surface area can be treated to bind with specific chemical or biological molecules. The electrical signal generated by that binding can then easily be transmitted along the wire.
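The two claims above – the extreme length-to-diameter ratio and the large surface-to-volume ratio – can be checked with simple arithmetic, modeling a nanowire as a cylinder with assumed but representative dimensions:

```python
# Back-of-the-envelope numbers for a nanowire modeled as a cylinder.
# The dimensions are assumed, representative values, not measurements.

diameter = 10e-9     # m (10 nm, an assumed typical nanowire diameter)
length = 1e-2        # m (1 cm, a length long enough to see unaided)

aspect_ratio = length / diameter
print(f"aspect ratio: {aspect_ratio:.0e}")   # a million to one

# For a cylinder (ignoring the end caps):
# surface/volume = 2*pi*r*L / (pi*r^2*L) = 2/r, so thinner wires expose
# vastly more surface per unit of material.
r = diameter / 2
surface_to_volume = 2 / r
print(f"surface/volume: {surface_to_volume:.1e} per meter")
```

The 2/r scaling is why shrinking the diameter, rather than lengthening the wire, is what drives the sensing advantage described above.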
Similarly, nanowires’ shape can be used to produce narrow-beam lasers or light-emitting diodes (LEDs), Gradecak says. These tiny light sources might someday find applications within photonic chips, for example – chips in which information is carried by light, instead of the electric charges that relay information in today’s electronics.
Compared to solid nanowires, nanotubes have a more complex structure: essentially one-atom-thick sheets of pure carbon, with the atoms arranged in a pattern that resembles chicken wire. They behave in many ways as one-dimensional materials, but are actually hollow tubes, like a long, nanometer-scale drinking straw.
The properties of carbon nanotubes can vary greatly depending on how they are rolled up, a property called chirality. (It’s similar to the difference between forming a paper tube by rolling a sheet of paper lengthwise versus on the diagonal: The different alignments of fibers in the paper produce different strength in the resulting tubes.) In the case of carbon nanotubes, chirality can determine whether the tubes behave as metals or as semiconductors.
But unlike the precise manufacturing control that is possible with nanowires, methods for making nanotubes have so far produced a random mix of types, which must be sorted to make use of one particular kind. Besides single-walled nanotubes, they also exist in double-walled and multi-walled forms.
In addition to their useful electronic and optical properties, carbon nanotubes are exceptionally strong, and are used as reinforcing fibers in advanced composite materials. “In any application where one-dimensionality is important, both carbon nanotubes and nanowires would provide benefits,” Gradecak says.
Source: David L. Chandler, MIT

Read more: http://www.nanowerk.com/news2/newsid=29945.php#ixzz2QGrCx84G
