A New Entry into the Mass Production of Quantum Dots

Quantum Technology Group



Quantum Technology Group (QTG) owns the patent rights for a unique non-toxic ZnSe nanoparticle (quantum dot, or QD) and a manufacturing process that specifically addresses industry demands. QTG has recently entered the marketplace as a bulk chemical manufacturer, selling QDs to industry.

Laboratory manufacturing has been established through collaboration with an existing chemical synthesis firm with decades of experience. QTG offers a stable platform technology for product development ventures, eliminating the need to source QDs and avoiding the licensing requirements, and possible infringements, that come with combining technologies and patents.

The Quantum Technology Group’s patent portfolio has broad implications for a wide variety of industries. Our technology is based on university research patents that are now exclusively licensed to QTG. The products QTG produces are in critical demand by industry and represent the platform, or core, technology it requires. Specific quality characteristics have been identified, and our products meet and exceed industry requirements. Furthermore, our capabilities produce a nontoxic alternative for current manufacturers: we currently meet RoHS (Restriction of Hazardous Substances) standards and hold Green Leaf certification. Our unique manufacturing process also makes it possible to produce nontoxic quantum dots in bulk.

Specific industries demand large-quantity production, and QTG can meet these needs. Our scientific competencies also extend into product development and collaborations with industry. QTG welcomes inquiries and offers its considerable resources and those of the well-established commercial and academic scientific community. QTG is located in the center of the technology corridor north of Boston, Massachusetts.


The Quantum Technology Group team consists of highly competent individuals from science and business. QTG employs university-based inventors preeminent in the fields of chemical engineering and nanotechnology. QTG has also partnered with a well-known firm highly proficient in chemical synthesis, whose expertise spans decades of successful chemical manufacturing and scale-up. The combined resources now represent a team focused on producing quantum dots, specific surface modifications, and bulk manufacturing.

The QTG business organization has engaged individuals with decades of experience involving complex product commercialization, particularly involving international markets. The QTG business team has targeted international and domestic opportunities and remains dedicated to understanding and meeting the demands of specific industries.

DANIEL FORTE – FOUNDER – Marketing and business development professional. Appointed by the United States Secretary of Commerce to the Export Trade Council.


JON KREMSKY PhD – DIRECTOR OF MANUFACTURING – Chemical synthesis and industrial scale-up expert. Former Director of Process Chemistry, Millipore.

JAMES MCKEARIN PhD – PRINCIPAL SCIENTIST – Group leader; chemical synthesis and process engineering expert.

PERRY CATCHINGS – RESEARCH AND DEVELOPMENT MANAGER – Former senior management at Polaroid; experienced in the transfer of technology from R&D and development to chemical manufacturing.

ULF DUNBERGER – PRODUCT INNOVATION – Expert automation and conceptual design business processes engineer focused on the validation of operational effectiveness for corporate control environments.

JUN WANG PhD – DIRECTOR RESEARCH AND DEVELOPMENT – Co-Inventor – Research Professor University of Massachusetts

LAKIS MOUNTZIARIS PhD – INVENTOR – Chairman, Chemical Engineering, University of Massachusetts.


Key issues concerning commercial product development involve:

• Rapid manufacturing of QDs with specified product qualities

• Bulk manufacturing capability (kilo quantity)

• Costs of goods – manufacturing

• Toxicity

• Desired stable emission frequencies

• Quantum absorption

• Quantum emission

• Surface modifications

• Key patents based on the QTG platform

The QDs covered by the QTG patent portfolio address the key issues noted above. These capabilities offer a stable platform for product development ventures.

Rice and Sandia National Labs Discover Unique NanoTube Photodetector


Project with Sandia National Laboratories leads to promising optoelectronic device

HOUSTON – (Feb. 27, 2013) – Researchers at Rice University and Sandia National Laboratories have made a nanotube-based photodetector that gathers light in and beyond visible wavelengths. It promises to make possible a unique set of optoelectronic devices, solar cells and perhaps even specialized cameras.

A traditional camera is a light detector that captures a record, in chemicals, of what it sees. Modern digital cameras replaced film with semiconductor-based detectors.

But the Rice detector, the focus of a paper that appeared today in the online Nature journal Scientific Reports, is based on extra-long carbon nanotubes. At 300 micrometers, the nanotubes are still only about a 100th of an inch long, but each tube is thousands of times longer than it is wide.

That boosts the broadband detector into what Rice physicist Junichiro Kono considers a macroscopic device, easily attached to electrodes for testing. The nanotubes are grown as a very thin “carpet” by the lab of Rice chemist Robert Hauge and pressed horizontally to turn them into a thin sheet of hundreds of thousands of well-aligned tubes.

They’re all the same length, Kono said, but the nanotubes have different widths and are a mix of conductors and semiconductors, each of which is sensitive to different wavelengths of light. “Earlier devices were either a single nanotube, which are sensitive to only limited wavelengths,” he said. “Or they were random networks of nanotubes that worked, but it was very difficult to understand why.”

“Our device combines the two techniques,” said Sébastien Nanot, a former postdoctoral researcher in Kono’s group and first author of the paper. “It’s simple in the sense that each nanotube is connected to both electrodes, like in the single-nanotube experiments. But we have many nanotubes, which gives us the quality of a macroscopic device.”

With so many nanotubes of so many types, the array can detect light from the infrared (IR) to the ultraviolet, and all the visible wavelengths in between. That it can absorb light across the spectrum should make the detector of great interest for solar energy, and its IR capabilities may make it suitable for military imaging applications, Kono said. “In the visible range, there are many good detectors already,” he said. “But in the IR, only low-temperature detectors exist and they are not convenient for military purposes. Our detector works at room temperature and doesn’t need to operate in a special vacuum.”

The detector is also sensitive to polarized light and absorbs light that hits it parallel to the nanotubes, but not if the device is turned 90 degrees.

The work is the first successful outcome of a collaboration between Rice and Sandia under Sandia’s National Institute for Nano Engineering program funded by the Department of Energy. François Léonard’s group at Sandia developed a novel theoretical model that correctly and quantitatively explained all characteristics of the nanotube photodetector. “Understanding the fundamental principles that govern these photodetectors is important to optimize their design and performance,” Léonard said.

Kono expects many more papers to spring from the collaboration. The initial device, according to Léonard, merely demonstrates the potential for nanotube photodetectors. They plan to build new configurations that extend their range to the terahertz and to test their abilities as imaging devices. “There is potential here to make real and useful devices from this fundamental research,” Kono said.

Co-authors are Aron Cummings, a postdoctoral fellow in Léonard’s Nanoelectronics and Nanophotonics Group at Sandia; Rice alumnus Cary Pint, an assistant professor of mechanical engineering at Vanderbilt University; Kazuhisa Sueoka, a professor at Hokkaido University; and Akira Ikeuchi and Takafumi Akiho, Hokkaido University graduate students who worked in Kono’s lab as part of Rice’s NanoJapan program. Hauge is a distinguished faculty fellow in chemistry. Kono is a professor of electrical and computer engineering and of physics and astronomy.

The U.S. Department of Energy, the National Institute for Nano Engineering at Sandia National Laboratories, the Lockheed Martin Advanced Nanotechnology Center of Excellence at Rice University, the National Science Foundation and the Robert A. Welch Foundation supported the research.

Solar Energy for Saudi Just makes $ense

Wed Feb 27, 2013 6:54am EST By Gerard Wynn

LONDON Feb 27 (Reuters) – Saudi Arabia has the world’s second best solar resource after Chile’s Atacama Desert, making investment in solar a no-brainer as an alternative to burning its most precious resource.

The Kingdom has for several years been talking up its plans to become a major player in solar power.

Four years ago a senior oil ministry official told Reuters: “We can export solar power to our neighbours on a very large scale and that is our strategic objective to diversify our economy. It will be huge.”

Since then the country has installed about 10 megawatts, a tiny fraction of what even cloudy England has installed.

But the country has now detailed plans for installed renewable power capacity in 2020 and 2032 which could put the country among the world’s top five solar power producers.

The competitiveness of solar photovoltaic (PV) power depends on the installed cost (including the price of solar modules and installation costs); local solar irradiation; and the cost of the alternative, as illustrated by the retail power price plus subsidies.

NASA solar irradiation data show that parts of Saudi Arabia are second only to the world’s driest desert, in Chile.

Solar module demand would be boosted by a similar shift in other sunny, emerging economies with subsidised fossil fuel power.


Saudi Arabia is dependent on electricity both for energy and water through desalination.

The main source of electricity is burning crude oil and increasingly, natural gas.

The country burned some 192.8 million barrels of crude to generate 129 million megawatt hours (MWh) of power in 2010, Saudi and International Energy Agency data show.

Saudi power generators pay about $4 per barrel for their oil, industry data show.

That works out at a running cost of $0.006 per kilowatt hour (kWh) in 2010, excluding all other capital, fixed and operating costs.

But accounting for the opportunity cost of exporting crude oil at international prices of $113 per barrel raises the economic cost of oil-fired power generation to $0.13 per kWh, ignoring all non-fuel costs.
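The arithmetic behind the fuel-cost figure is easy to check. A minimal sketch using the article's 2010 numbers (192.8 million barrels, 129 million MWh, $4 per barrel); the electricity-per-barrel figure is derived from those numbers, not stated in the article:

```python
# Fuel cost of Saudi oil-fired generation, from the article's 2010 figures.
barrels = 192.8e6          # barrels of crude burned in 2010
mwh_generated = 129e6      # MWh of electricity produced
kwh_generated = mwh_generated * 1_000

# At the subsidised internal price of $4 per barrel:
internal_cost_per_kwh = barrels * 4 / kwh_generated
print(f"Fuel cost at $4/bbl: ${internal_cost_per_kwh:.3f}/kWh")   # ~$0.006/kWh

# Implied electrical output per barrel (derived, not stated in the article):
kwh_per_barrel = kwh_generated / barrels
print(f"Electricity per barrel: {kwh_per_barrel:.0f} kWh")
```

A barrel of crude holds roughly 1,700 kWh of thermal energy, so about 669 kWh of electricity per barrel implies a plant efficiency near 40 percent, suggesting the article's figures are internally consistent.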

A simplified solar cost calculator developed by the U.S. Department of Energy‘s National Renewable Energy Laboratory (NREL) estimates the cost of solar power at $0.07 per kWh under Saudi conditions.

That assumes a capacity factor of 33 percent as can be expected in sunnier locations in southern Saudi Arabia and a full capital cost of $1.5 per watt, a conservative estimate for utility-scale installations.

That is before taking into account the annual degradation of solar modules, and losses as result of dust, sand and high temperatures, none of which are deal-breakers.

The NREL calculator also appears to ignore DC to AC conversion losses which can cut power output by about 25 percent compared with nameplate DC capacity.
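NREL's simple calculator boils down to a levelized-cost formula. The sketch below is not NREL's tool; it is a generic LCOE calculation under assumed finance terms (9 percent discount rate, 20-year life, $0.02/W/yr O&M, all my assumptions) that lands near the $0.07/kWh cited above, and shows how a 25 percent DC-to-AC derate moves the number:

```python
def lcoe(capex_per_watt, capacity_factor, derate=1.0,
         om_per_watt_yr=0.02, discount=0.09, years=20):
    """Levelized cost of electricity in $/kWh (simplified, no degradation)."""
    # Capital recovery factor: annualizes the up-front capital cost.
    crf = discount * (1 + discount) ** years / ((1 + discount) ** years - 1)
    annual_cost = capex_per_watt * crf + om_per_watt_yr   # $/W per year
    annual_kwh = 8.760 * capacity_factor * derate         # kWh/W per year
    return annual_cost / annual_kwh

print(f"No derate:  ${lcoe(1.5, 0.33):.3f}/kWh")
print(f"25% derate: ${lcoe(1.5, 0.33, derate=0.75):.3f}/kWh")
```

Under these assumptions the result is about $0.064/kWh without the derate and $0.085/kWh with it, consistent with the article's point that conversion losses matter but are not deal-breakers against a $0.13/kWh opportunity cost.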



NREL has helped develop an open access database measuring solar irradiance, with funding from the U.S. Department of Energy and sourced from NASA.

It is part of a Solar and Wind Energy Resource Assessment (SWERA) initiative started in 2001 with U.N. funding to advance the large-scale use of renewable energy technologies.

The data is measured at one-degree resolution globally averaged from 1983-2005 and calculated according to latitude and local weather.

Solar irradiance is calculated according to various formats, for example a flat surface laid horizontal to the Earth (“Global Horizontal Irradiance”), or tilted due south at the angle of local latitude (“Solar Tilt”), or tilted southwards and also tracking the sun (“Direct Normal Irradiance”, or DNI).

The data reinforce how Germany is not the most obvious place for the world’s leading solar market.

The sunniest region of southern Germany has a DNI of 3.39 kWh per square metre per day. (See Chart 1)

Saudi Arabia’s capital, Riyadh, has a DNI of 6.68 kWh, and the vast empty land south of the city is as sunny as 7.99 kWh.

The country’s Red Sea coastline north of the second biggest city Jeddah rises as high as 8.60 kWh.

That appears to be the second sunniest place on Earth, only over-shadowed by Chile’s Atacama desert which has a DNI of up to 9.77 kWh per square metre per day.



Local solar radiation determines how much power a given solar module will generate.

Capacity factor compares the electricity that a solar module actually generates with the theoretical maximum it would produce running at full capacity all the time.

The standard test conditions (STC) for assigning the nameplate capacity of solar panels assume irradiance of 1,000 watts per square metre, or 24 kWh per square metre over 24 hours, at an ambient temperature of 25 degrees Celsius.

Such assumptions can be applied to actual field conditions recorded by the NASA data to calculate a capacity factor.

A solar panel located south of Riyadh, for example, would have a capacity factor of about 33 percent, given a local solar irradiance of 8 kWh, compared with test conditions of 24 kWh per day.


There are further real-world losses associated with solar power.

High temperatures are relevant in Saudi Arabia: power output falls by about 0.5 percent per degree Celsius above 25 degrees, according to NREL assumptions, though probably not by enough to undermine solar's competitiveness.
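The capacity-factor and temperature adjustments described above can be combined in a few lines; the 45-degree cell temperature below is an illustrative assumption, not a figure from the article:

```python
# Capacity factor from local irradiance vs. standard test conditions (STC).
local_dni = 8.0        # kWh/m2/day, south of Riyadh
stc_daily = 24.0       # kWh/m2/day equivalent of 1,000 W/m2 around the clock
capacity_factor = local_dni / stc_daily
print(f"Capacity factor: {capacity_factor:.0%}")   # 33%

# Temperature derate: ~0.5% output lost per degree C above 25 C (NREL assumption).
cell_temp_c = 45.0     # illustrative hot-climate operating temperature
derate = 1 - 0.005 * max(0.0, cell_temp_c - 25.0)
print(f"Output at {cell_temp_c:.0f} C: {derate:.0%} of rated")   # 90%
```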

Other emerging economies have rapidly growing power demand and subsidised fossil fuel consumption including China and India.

The NASA data show that both these countries have locations where solar irradiation rivals Riyadh.

Unsubsidised solar power can replace fossil fuels at scale in such locations over the next decade at zero or negative cost, with implications both for solar module and fossil fuel demand.

To See NREL Solar Chart, Go Here: http://maps.nrel.gov/swera?visible=swera_dni_nasa_lo_res&opacity=50&extent=-5.13,41.36,9.56,51.09

Nanotechnology’s Revolutionary Next Phase: Eric Drexler on “APM”

The term “nanotechnology” has been bandied about so much over the last few decades that even the researcher who popularized the term is the first to point out that it’s lost its original meaning. Nanotech, or the manipulation of matter on atomic and molecular scales, is currently used to describe micro-scale technology in everything from space technology to biotech.


As such, nanotech has already changed the world. But the fruition of atomically precise manufacturing (APM) — nanotech’s next phase — promises to create such “radical abundance” that it will not only change industry but civilization itself.

At least that’s the view of Eric Drexler, considered by most to be the father of nanotechnology. An American engineer, technologist and author with three degrees from M.I.T., Drexler is currently at the “Programme on the Impacts of Future Technology” at Oxford University in the U.K.

Forbes.com questioned Drexler about points discussed in his forthcoming book, Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, due out in May.

Has nanotechnology, as most of the world currently understands it, been over-hyped? At the outset, “nanotechnology” essentially meant atomically precise manufacturing (APM). But by the time something called nanotechnology won large-scale funding a decade ago, the term sometimes meant APM, and sometimes meant something more like conventional materials science. But expecting to get APM-level technologies out of typical areas of materials science is like expecting to get a Swiss watch out of a cement mixer. [APM] progress has been in the molecular sciences. People looking to materials science for progress in APM have been setting themselves up to be blindsided, because some of the most important bootstrapping technologies for APM are not labeled “nanotechnology.”

In “Radical Abundance,” you note that APM-level production technology will allow a box on a desktop “to manufacture an infinite range of products drawn from a digital library.” This almost sounds like magic. How would the atoms be arranged and manipulated to facilitate the manufacturing process?

An ordinary printer shows how digital information can be used to arrange small things — pixels — to make a virtually infinite range of images. By doing something similar with small bits of matter, APM-level technologies can fabricate a virtually infinite range of products. 3D printing also illustrates this principle.

Imagine factory machinery putting small components together to make larger components and you have a good idea of how APM-based production can work. Down at the bottom, the parts are simple molecules from ordinary commercial materials in a can or a drum, somewhat like large ink cartridges. Simple molecules are atomically precise, so they make a good starting point for atomically precise manufacturing. This works if the factory machines themselves are atomically precise and guide molecular motions accurately enough, and physics shows that nanoscale machines can, in fact, do this.

Factories that use very small machines can be very compact, just a few times larger than what they produce. A desktop-scale machine could manufacture a tablet computer or a roll of solar photovoltaic cells.

What about the cost-effectiveness of APM? Cost-effectiveness depends on both production cost and product value. APM products can have very high performance and value because atomically precise materials based on carbon nanotubes can be extremely strong and lightweight, because atomically precise computer devices can far outperform today’s nanoscale electronics, and so on through a range of other examples.

Production costs can be low because the raw materials are inexpensive and the processing can go straight from raw materials to final products using highly productive machinery. The key insight here is that nanoscale mechanical devices can move and act almost exactly like larger machines, but moving at much higher frequencies. This is a consequence of physical scaling laws of the kind that [physicist] Richard Feynman described almost 50 years ago, and it enables high throughput. So the prospect is a technology that combines high performance with low cost, typically by large factors.
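The scaling argument can be made concrete. If a machine's parts move at a fixed characteristic speed, its cycle frequency scales inversely with its size; the specific numbers below are illustrative, not Drexler's:

```python
# Scaling sketch: cycle frequency ~ speed / size for geometrically similar machines.
tip_speed = 1.0            # m/s, held constant across scales (illustrative)
macro_size = 1.0           # m, a benchtop machine
nano_size = 100e-9         # m, a 100-nanometre mechanism

macro_freq = tip_speed / macro_size   # ~1 cycle per second
nano_freq = tip_speed / nano_size     # ~10 million cycles per second

print(f"Frequency ratio: {nano_freq / macro_freq:,.0f}x")
```

A ten-million-fold increase in operating frequency is what lets tiny machinery achieve high throughput despite processing tiny amounts of material per cycle.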

To be an exploratory engineer means applying conservative engineering principles — margins of safety, redundant options, and so on — and design analysis based on well-established, textbook-quality scientific knowledge. This is the only way to draw reliable conclusions about what can be accomplished.

The place to look for new and surprising results is in the range of technologies that are beyond reach of current fabrication technologies. APM-level technologies are in this range. We can see paths forward toward these technologies — using today’s molecular tools to step by step build better tools. But a clear view isn’t the same as a short path. APM-level technologies are not around the corner.

Would APM make revolutionary inroads into biotech — specifically, in developing nano-machines that could unclog arteries; reverse brain damage in stroke victims; or even manufacture a truly robust artificial heart? APM is very different from biotechnology (think of the difference between a car and a horse). But we already see nanoscale atomically precise devices being used to read and synthesize DNA, devices borrowed from biological molecular machinery. Nanoscale atomically precise technologies like these can be made much faster and more efficient. Nanomedicine is already researching nanoscale functional particles that can circulate in the body and target cancer cells. Technologies of this kind have enormous room for improvement, and advances in atomically precise fabrication will be the key. The body relies on atomically precise devices to do its work, and atomically precise devices are the best way to accomplish precise medical interventions at the molecular level.

Would APM lower the cost of access to outer space? The main barrier to space activity today is cost. With the ability to make materials tens of times stronger and lighter than aluminum, and at a low cost per kilogram, access to space becomes far more practical. The difficulties of producing high-performance, low-defect, high-reliability systems also decline sharply with atomically precise manufacturing.

In what fields would APM cause the most pronounced economic disruption and the collapse of global supply chains to more local chains? The digital revolution had far-reaching effects on information industries. APM-based production promises to have similarly far-reaching effects, but transposed into the world of physical products. In thinking about implications for international trade and economic organization, three aspects should be kept in mind: a shift from scarce to common raw materials, a shift from long supply chains to more direct paths from raw materials to finished products, and a shift toward flexible, localized manufacturing based on production systems with capabilities comparable to on-demand printing. This is enough to at least suggest the scope of the changes to expect from a mature form of APM-based production — which again is a clear prospect but emphatically not around the corner.

Would APM help make war obsolete? I don’t see that anything will make war obsolete, but the prospect of APM-level technologies changes national interests in two major ways:

By deeply reducing the demand for scarce resources — including petroleum — APM technologies will reduce the motivations for geopolitical struggles for what are now considered strategic resources.

Secondly, by making calculations of future military power radically uncertain, the prospect of these technologies gives good reason to examine approaches to cooperative development merged with confidence-building mutual transparency among major powers. Changes in national interests will call for developing [military] contingency plans premised on the emergence of these technologies.


When will we actually see the onset of the APM revolution? The paths forward require further advances in atomically precise fabrication, an area that began with organic chemistry more than a century ago and continues to make great strides. A sharper engineering focus will bring faster progress and further rewards, just as progress in atomically precise fabrication has brought rewards since the beginning in science, industry, and medicine.

Although advanced objectives like full-scale APM stand beyond a normal business R&D investment horizon, incremental steps in key technologies are steadily emerging. But we need a more focused program of design, analysis, research, and development.

Do all roads lead to APM? Thus, is some form of APM likely to be ubiquitous among intelligent civilizations in the galaxy, if of course such civilizations exist? There’s no substitute for atomic precision because there’s no substitute for precisely controlling the structure of matter. The only known way to do this is by guiding the motion of molecules to put them in place, according to plan, by means of directed bonding — in other words, by some form of atomically precise manufacturing. Since there are many ways to develop these technologies, I’d say that all roads forward do indeed lead to APM.


Nanoparticles Split Water, Power Fuel Cell

Si Nanoparticles Split Water, Power Fuel Cell

by Tim Palucka

Materials Research Society | Published: 29 January 2013

Generating electricity in the field to power a laptop or night vision goggles could someday be just as simple as adding water to a cartridge containing silicon nanoparticles and a base. Researchers at the University at Buffalo (SUNY) have demonstrated that nanoparticles of Si in a basic solution can split water to release hydrogen and power a portable fuel cell to produce electricity. The ability to split water on-demand without adding heat, light, or electricity to the system could be a significant advance in fuel cell technology.

“The reaction rate with these very small 10-nm Si particles is so much faster than with the relatively large 100 nm Si particles,” says Mark Swihart, whose team published their results in a recent issue of ACS Nano Letters. “Because of this fast reaction rate and the fact that there’s no delay between when you add water and when the reaction starts, it makes the technology at least practical in terms of being able to power a device instantaneously.”


While there was some scant evidence in the scientific literature that Si could perform this feat of splitting water to release hydrogen, it was largely ignored because the reaction rate was so slow as to be uninteresting. Using Al, Zn, or metal hydrides for this purpose looked so much more promising that Si fell by the wayside.

But Swihart and his group have been working with Si nanoparticles for more than a decade, mostly in the realm of quantum dot research. In doing so, they frequently had to use a base such as hydrazine for etching, and they noticed that hydrogen was released when aqueous hydrazine reacted with Si. Investigation showed that the hydrogen came not from decomposition of hydrazine, but from the oxidation of Si to release hydrogen from water.

Further investigation of the reaction using Si particles of different sizes, focusing on 10-nm and 100-nm-diameter particles with aqueous KOH, showed a particle size dependent liberation of hydrogen from water. But the factor of 150 increase in the reaction rate for the 10-nm-diameter particles compared to the 100-nm-particles was well in excess of the factor of 6 difference in their specific surface area. Thus, the increase in rate is much greater than expected based on increased surface area alone.
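The surface-area comparison can be checked with a back-of-the-envelope calculation. For ideal spheres, specific surface area scales as 1/diameter, so 10-nm particles would have 10 times the area per gram of 100-nm particles; the article reports a measured factor of about 6, while the rate jumped by a factor of 150. A minimal sketch:

```python
# Specific surface area of a sphere scales as 6/d, so the ratio between
# two sizes is just the inverse ratio of their diameters.
def specific_surface_ratio(d_small_nm, d_large_nm):
    return d_large_nm / d_small_nm   # (6/d_small) / (6/d_large)

ideal_area_ratio = specific_surface_ratio(10, 100)   # 10x for ideal spheres
measured_area_ratio = 6                              # from the article
rate_ratio = 150                                     # from the article

# Rate enhancement beyond what the extra surface area alone explains:
excess = rate_ratio / measured_area_ratio
print(f"Ideal sphere area ratio: {ideal_area_ratio:.0f}x")
print(f"Excess rate per unit area: {excess:.0f}x")
```

The roughly 25-fold excess per unit area is the anomaly that the etch-geometry argument below is meant to explain.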

Swihart believes the difference is caused by geometry, not surface area. The (111) lattice planes etch much more slowly than other planes of Si, so crystals terminated entirely by (111) planes react slowly.  “The 10 nm particles etch isotropically—they just get smaller and go away,” he says. There’s no time for faceting to occur in this case. But the 100 nm particles undergo anisotropic etching. The faster-reacting (100) and (110) planes etch away first, leaving a particle with slower-reacting (111) planes behind in what he describes as a “hollow nano-balloon structure.” “With the bigger particles,” Swihart says, “eventually the unreactive (111) surfaces are the ones that end up being left,” thus slowing the reaction rate.

As a proof-of-concept, the research team tested a small fuel cell with a 20 stack polymer electrolyte membrane, comparing the fuel cell’s power output when fed hydrogen from the Si nanoparticle reaction versus hydrogen from a gas cylinder. Stoichiometrically, two moles of H2 should be generated for one mole of Si. In the tests, the fuel cell powered by H2 generated by reaction with Si produced more current and voltage than when the fuel cell was fed a stoichiometric amount of H2 from a gas cylinder. The difference is due to additional hydrogen, beyond the stoichiometric reaction amount, that terminates the Si surfaces after fabrication of the nanoparticles.
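The stoichiometry (Si + 2H2O → SiO2 + 2H2) sets the baseline hydrogen yield that the fuel-cell test exceeded. A quick sketch of the ideal yield per gram of silicon:

```python
# Ideal H2 yield from Si + 2 H2O -> SiO2 + 2 H2.
M_SI = 28.085          # g/mol, molar mass of silicon
MOLAR_VOL = 22.4       # L/mol, ideal gas at STP

mol_si_per_gram = 1 / M_SI
mol_h2 = 2 * mol_si_per_gram          # 2 mol H2 per mol Si
litres_h2 = mol_h2 * MOLAR_VOL

print(f"H2 per gram of Si: {mol_h2:.3f} mol = {litres_h2:.2f} L at STP")
```

Roughly 1.6 litres of hydrogen per gram of silicon at STP; the surface-terminating hydrogen the researchers describe adds to this stoichiometric amount.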

While there is much more work to be done, Swihart believes that if this technology is ever to become practical as a portable electricity generator, the KOH (or other base) would have to be mixed in with the Si in a cartridge, so you would not have to carry around a bottle of KOH solution. Such a device would come with the instructions “just add water.” For a soldier in the field needing to power night vision goggles, water from a nearby stream could be all he needs.


Read the abstract in ACS Nano Letters  here.

Nano-rod solar cell generates hydrogen

A new type of solar collector that uses gold nano-rods could convert sunlight into energy without many of the problems associated with traditional photovoltaic solar cells.

24 February 2013 Will Parker


The developers of the new technique, from the University of California – Santa Barbara, say it is “the first radically new and potentially workable alternative to semiconductor-based photovoltaic devices to be developed in the past 70 years.” They provide details of the new solar hydrogen generator in the journal Nature Nanotechnology.

In conventional photovoltaic cells, sunlight hits the surface of semiconductor material, one side of which is electron-rich, while the other side is not. The photon excites the electrons, causing them to leave their positions, and create positively-charged “holes.” The result is a current of charged particles – electricity.

In the new technique, it is not semiconductor materials that provide the electrons and venue for the conversion of solar energy, but a “forest” of gold nano-rods operating in water. Specifically, gold nano-rods capped with a layer of crystalline titanium dioxide and platinum, and a cobalt-based oxidation catalyst deposited on the lower portion of the array.

“When nanostructures, such as nano-rods, of certain metals are exposed to visible light, the conduction electrons of the metal can be caused to oscillate collectively, absorbing a great deal of the light,” explained Martin Moskovits, a professor of chemistry at UCSB. “This excitation is called a surface plasmon.”

As the “hot” electrons in these plasmonic waves are excited by light particles, some travel up the nano-rod, through a filter layer of crystalline titanium dioxide, and are captured by platinum particles. This causes the reaction that splits hydrogen ions from the bond that forms water. Meanwhile, the holes left behind by the excited electrons head toward the cobalt-based catalyst on the lower part of the rod to form oxygen.

The researchers say that hydrogen production was clearly observable after about two hours. Importantly, the nano-rods were not subject to the photo-corrosion that often causes traditional semiconductor materials to fail and Moskovits says the device operated with no hint of failure for “many weeks.”

Though still in its infancy, the research promises a more robust method of converting sunlight into energy. “Despite the recentness of the discovery, we have already attained ‘respectable’ efficiencies. More importantly, we can imagine achievable strategies for improving the efficiencies radically,” Moskovits said.



See a summary of the article here: http://www.nature.com/nnano/journal/vaop/ncurrent/full/nnano.2013.18.html

Source: University of California – Santa Barbara

The $1 Trillion Choice

Posted February 22, 2013 By Mark Green

While the White House talks again about raising taxes on oil and natural gas companies, let’s look at a chart that captures the starkly different outcomes – in terms of revenue for government – from two policy paths: higher energy taxes vs. increased energy development.

You read it right: The difference between the two policy choices, in cumulative dollars for government from now until 2030, is more than $1 trillion.

According to a 2011 study by Wood Mackenzie, increased oil and natural gas activity under pro-access policies would generate an additional $800 billion in cumulative revenue for government by 2030. The chart puts into perspective the size of these accumulating revenues – enough to fund entire federal departments at various points along the timeline. By contrast, Wood Mackenzie also found that hiking taxes on oil and natural gas companies would, by 2030, result in $223 billion in cumulative lost revenue to government.

Another way to look at it: The chart below shows that the higher-taxes policy path would add about $16 billion in cumulative revenue for government at first, but that sharp revenue losses would follow as increased taxes slow energy development (costing about 22,000 jobs in the process).


The choice is a no-brainer. Yet some in Washington continue to push for the higher taxes path – the less-energy, fewer-jobs, less-revenue-for-government path. White House Press Secretary Jay Carney this week:

“If we have one fundamental goal here in Washington, it should be to work towards growing the economy and increasing job creation, not doing unnecessary, arbitrary things to halt or reverse that process.”

Carney’s right. We need policies that help the economy. Yet working against the economy and job creation is the likely result of the course the administration keeps pushing: discriminatory tax increases on our industry – one that already contributes an average of $86 million a day to the federal government in income taxes, bonus bids, rental payments, royalties and other fees. API Executive Vice President Marty Durbin, in a recent conference call with reporters:

“When it comes to taxes, singling out our industry for tax increases is bad economic policy, it’s bad tax policy and it punishes one of the few industries that has created jobs and grown our economy throughout the economic downturn. We pay more than our fair share, and despite repeated allegations, we receive no subsidies. We pay federal taxes at an effective rate – 44 percent – that is well above the 29 percent effective rate paid by other S&P Industrials.”

As Durbin noted, higher taxes would impact the significant stimulus our industry provides to the broader economy – $545 billion in 2012. That figure represents jobs, investments in facilities and operations and energy development that generates millions in revenue for government – stimulus that doesn’t require legislation from Congress or a new federal program. Here’s how industry’s investments, measured in capital spending, stacked up from 2006-2011:



“Short-sighted, punitive tax proposals could put at risk those investments, diminishing what we can do for economic growth, sacrificing potential jobs and, paradoxically, sacrificing revenues that could come from new development and new jobs.”

If the goal is more revenue for government from the oil and natural gas industry, there are two paths: One that produces a sizeable net loss over the next two decades – as well as job losses and less energy – or one that, through increased energy development, generates hundreds of billions of additional dollars for government while adding jobs, growing the economy and producing more of the energy our country needs: the $1 trillion choice.


Saudi Arabia launches massive solar power procurement program




(Nanowerk News) Saudi Arabia’s King Abdullah City for Atomic and Renewable Energy (K.A.CARE) has issued its long-awaited White Paper paving the way towards the deployment of 54 gigawatts of renewable energy projects by 2032, worth over $60 billion. K.A.CARE has announced the launch of its Renewable Energy Competitive Procurement Portal and released a White Paper outlining how this vast procurement process will unfold.

Long-term renewable energy targets for Saudi Arabia. (Source: K.A.CARE)

This announcement marks the launch of a registration process for  interested companies to submit feedback and obtain important information in  connection with the Renewable Energy Program. Crucially, it paves the way  towards the launch of the introductory procurement round.
The introductory procurement round will consist of five to seven projects with a combined capacity of up to 800 megawatts. The introductory round is part of Saudi Arabia’s colossal program to procure 41 gigawatts of solar power capacity by 2032.
“This is a very important milestone, both for Saudi Arabia and the Middle East solar market as a whole. ESIA will continue to work closely with KA-CARE to make sure this program becomes a resounding success and a benchmark for excellence,” said Vahid Fotuhi, President of ESIA.
Source: Emirates Solar Industry Association  (ESIA)

Read more: http://www.nanowerk.com/news2/green/newsid=29203.php#ixzz2Le0Bb55x

Desalination + Solar Energy + Nanotechnology = Potable H2O

Note to Readers: A great update article on mid-to-mega desalination water projects throughout the world. One thing these projects will all need is plentiful, (comparatively) inexpensive ENERGY! We have been following “3rd Generation Solar Cells” (QD solar cells) from the perspective of cheap, abundant energy conversion capability AND the ability to mass produce the eventual solar delivery module.
Add to this, as one of our long-time “science-guru-visionaries” shares with us, the ability to accelerate and make more cost effective the “filtration process” by using nanotechnology in the desalination process, in addition to the energy supply component … that is what they call in baseball “a double play!” – Cheers! – BWH


Larger scale desalination plants are coming online as a result of increased water scarcity and lower cost membranes. With the largest facility in the world – 1,025,000 m3/day – set for operation in 2014, what challenges and benefits come with such large scale infrastructure? Corrado Sommariva reports.

There is a lot of emphasis today on mega-ton water systems. This trend is clearly driven by the gigantic growth of cities. The number of cities that will exceed five million residents in 2015 will be on the order of 40-50. Clearly, in such an environment, water shortages and the pollution of water resources become serious problems.

Taking into consideration that the majority of these cities are located close to the sea and in arid areas, desalination plants assume extreme importance and are, in fact, critical strategic assets necessary for the community to sustain and support life.

Although large plants may offer opportunities in terms of economy of scale, there is a question as to whether the mega solution is automatically the best option. This is especially true in the case of Seawater Reverse Osmosis (SWRO) plants.


At the time of its opening, the Hadera SWRO facility was one of the largest membrane plants in operation, with a capacity of 456,000 m3/day

While large unit size is typically an advantage for thermal desalination, the modularity of SWRO technology makes it practically insensitive to cost once the plant capacity exceeds 100,000 m3/day. This feature offers two advantages. The first is that the plant can be designed for installation in more decentralized locations closer to the point of use, decreasing the associated transmission and distribution costs and the energy footprint.

A second advantage is that the risk profile of a medium- to large-sized project is clearly more moderate than the risk profile of a mega ton project. This becomes an important factor in addressing the difficulty in securing the necessary level of investment associated with such developments in the current and near-term financial climate.

The particular advantage of SWRO is that it allows modularity of installation, whereby an infrastructure designed for long-term expansion could be enlarged gradually according to the demand for additional capacity. In this scenario, limited-recourse or non-recourse loans could easily be made available for small- to mid-size plants. This type of investment presents a much more moderate risk profile and could be revisited for expansion or reduction in size in accordance with demand development. However, producing this large amount of water imposes again, and with much greater emphasis, the necessity to meet the dual objectives of sustainable management of water resources and high competitiveness.


Membranes stack up: With a capacity of 624,000 m3/day, IDE Technologies’ Soreq plant in Israel will be one of the largest desalination plants in the world

Putting this into context, if we think about the large numbers, the operation of a mega SWRO desalination plant (for example, capacity of 1,000,000 m3/day) would involve a quantity equivalent to three to four million m3/day of seawater abstracted from the sea, and two to three million m3/day of brine disposal discharging back to the sea. With today’s state-of-the-art technologies, the energy required to power this plant would be equivalent to 200 MW.

Things change dramatically if we think about a similar size plant driven by thermal technology. This would involve a quantity equivalent to eight to 10 million m3/day of seawater abstracted from the sea and seven to nine million m3/day of brine disposal with a thermal discharge. In this case, the energy required to drive this plant would be equivalent to 1,500 MW.

However, the majority of this energy – 1,000 to 1,300 MW – would be discharged as low-grade heat into the sea.
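The figures above can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below converts the stated continuous power draws into specific energy per cubic metre of product water; the 200 MW, 1,500 MW, and 1,000,000 m3/day numbers come from the text, while the conversion itself is generic:

```python
# Back-of-the-envelope check of the article's figures: specific energy
# (kWh per m3 of product water) for the two 1,000,000 m3/day scenarios.

def specific_energy_kwh_per_m3(power_mw: float, product_m3_per_day: float) -> float:
    """Convert a continuous plant power draw into energy per cubic metre of water."""
    energy_kwh_per_day = power_mw * 1000 * 24  # MW -> kW, times 24 hours
    return energy_kwh_per_day / product_m3_per_day

swro = specific_energy_kwh_per_m3(200, 1_000_000)      # SWRO scenario
thermal = specific_energy_kwh_per_m3(1500, 1_000_000)  # thermal scenario

print(f"SWRO:    {swro:.1f} kWh/m3")
print(f"Thermal: {thermal:.1f} kWh/m3")
```

The SWRO case works out to roughly 4.8 kWh/m3, plausible for whole-plant consumption including intake and pretreatment, while the thermal case is dominated by the low-grade heat ultimately rejected to the sea.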

Both scenarios pose dramatic questions on the environmental load related to mega ton water projects and energy conservation.

If addressing the environmental aspect is a clear goal, today’s technology would not enable the installation of such facilities for less than US$1-2 billion per mega project, with estimated annual running costs of not less than US$200 million per plant. This is not an easy deal for today’s market.

Our generation needs to be aware that building plants simply to satisfy our water needs for tomorrow is not enough. Now is the time that we must contribute to solving global water problems by establishing novel desalination and water treatment technology that would enable us to achieve the ultimate goal of solving the global water-energy-food problem.

However, we are still far from this situation. Mega projects are a solution for today and perhaps tomorrow, but we need to act to ensure that the day after tomorrow we have a solution at hand that is revolutionary if we compare it to the traditional approach.

Desalination is a fast-moving and dynamic industry as far as research and development is concerned. There are countless papers on the advantages of forward osmosis, carbon nanotubes, membrane distillation, biomimetics, and renewable water generation coupled with low-temperature distillation or renewable power.

In reality, however, the chance of having one of these technologies installed and operating in one of the future mega projects is very slim. While we are currently installing projects on a “need-oriented” drive, we need to “seed-drive” the technologies that are going to make these projects viable in the future. We cannot delegate this work to future generations; the time available is short. We need to have a set of small- to medium-size plants online by 2014-2015 in order to have a sustainable mega project operational in 2020.

This will also be needed to persuade banks to finance deals built on some of the new concepts that the research and development sector is now cultivating.

These plants will need to be capable of validating the technology, providing feedback to operation and further design refinement. All stakeholders need to be engaged in this endeavor. The government needs to have a solution that is going to address long-term problems in a financially and environmentally sustainable way without compromising the robustness and consistency of the solution.

The importance of this approach needs to be understood and embraced at all levels and all stakeholders need to follow up on this call. Industry and developers need to find ways to invest in these solutions with an equitable distribution of risks between themselves. Banks must also find investment in new water technologies a reasonable deal – as was the case with investing in new renewable power generation technologies a long time ago.

There is a lot of work needed to make sure that desalination is “a promise for the future”. However, it is a promise that can be kept if all stakeholders are willing to think ahead and recognize the potential of new configurations, new technologies and new shared responsibilities for addressing our future water challenges.

Author’s note: Dr. Corrado Sommariva is President of the International Desalination Association (IDA). He can be reached at Corrado.Sommariva@ilf.com. For more information about IDA, visit www.idadesal.org.

How Quantum Dots could be in – and could help – your next TV

Nano-tech that could be in your next television

by Geoffrey Morrison: Posted February 18, 2013


At CES in January, Sony announced several LCD TVs with “Triluminos,” a new backlighting method that it promises offers “rich, authentic color, and excellent red and green reproduction.” Digging deeper, it turns out Triluminos includes an optical component produced by QD Vision, Inc. called “Color IQ,” which uses quantum dots to help create light.

OK, so what are quantum dots?

Quantum dots are a “semiconductor nanocrystal technology.” If you remember your high school (college?) physics, avail yourself of the Wiki page.

If you don’t know your valence bands from your conduction bands, you can think of a quantum dot as this: tiny pieces of matter with unique properties, including the ability to emit light at very specific wavelengths. Sort of like microscopic pieces of glitter that glow green, red, or blue depending on their size.
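The size-to-color relationship can be made concrete with the first-order Brus (effective-mass) approximation, which adds a quantum-confinement term and subtracts an electron-hole Coulomb term from the bulk bandgap. The sketch below uses textbook CdSe parameters purely for illustration (the article doesn’t specify a material, and other dot chemistries would use different constants):

```python
import math

# Illustrative sketch: Brus effective-mass model for quantum dot emission.
# CdSe parameters below are textbook values, chosen only for illustration.

HBAR = 1.0545718e-34       # reduced Planck constant, J*s
M0 = 9.1093837e-31         # electron rest mass, kg
E_CHARGE = 1.60217663e-19  # elementary charge, C
EPS0 = 8.8541878e-12       # vacuum permittivity, F/m
EV = 1.60217663e-19        # joules per electron-volt

def brus_emission_nm(radius_nm, e_gap_ev=1.74, m_e=0.13, m_h=0.45, eps_r=10.6):
    """Bulk bandgap + confinement term - Coulomb term, returned as a wavelength."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2) / (2 * r**2) * (1/(m_e*M0) + 1/(m_h*M0))
    coulomb = 1.786 * E_CHARGE**2 / (4 * math.pi * EPS0 * eps_r * r)
    e_total_ev = e_gap_ev + (confinement - coulomb) / EV
    return 1239.84 / e_total_ev  # photon wavelength in nm

for radius in (2.0, 2.5, 3.0):
    print(f"R = {radius} nm  ->  ~{brus_emission_nm(radius):.0f} nm emission")
```

The trend is the point: shrink the dot and the confinement term grows, pushing emission toward blue; grow the dot and emission slides toward red, exactly the “glitter that glows green, red, or blue depending on size” described above.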


Red, green, and blue spectra for red, green, and blue quantum dots.

(Credit: QD Vision)

Specific wavelengths of light are good. We need specific wavelengths of light – the more specific the better. All televisions create an image by combining the three additive primary colors: red, green, and blue (RGB). Sharp adds yellow, a secondary color, but yellow isn’t present in any content; it’s derived by the TV. Mixing RGB in varying amounts gives us all the colors possible in our current TV system.

All LCDs create these colors with filters. Plasma displays create them with phosphors that glow in the required color (similar to the way CRT tube TVs worked). OLED, depending on the company, is one or the other. LG’s method creates a “white” OLED then adds color filters. Samsung’s method has specific red, green, and blue OLED sub-pixels.

So where do quantum dots come in? Sony has a method.

Sony’s X900 and W900 lines

Three of Sony’s 2013 TVs will use quantum dots in their backlighting, in the guise of QD Vision’s Color IQ tech (the 65X900, 55X900, and 55W900). A traditional LED LCD uses blue LEDs, coated with a yellow phosphor, to create “white” light. While reasonably efficient compared to other technologies (i.e., CCFL LCDs and plasmas), this still creates a lot of “wasted” energy. Orange, for example, doesn’t make it past the color filters on the front of the TV (instead, red and green are combined to create orange).

Triluminos uses blue LEDs, but instead of coating them with a yellow phosphor, the blue light from the LEDs passes through the Color IQ optical element containing red and green quantum dots. So the blue LEDs have two functions: create blue light, but also energize red- and green-emitting quantum dots so they in turn can create red and green light. About two-thirds of the light created by the blue LEDs is used to excite the QDs. Cool, right?
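To see where the energy goes in this downconversion scheme, here is a rough photon-bookkeeping sketch. The two-thirds split comes from the text; the 450/530/630 nm wavelengths are assumed typical values for blue LEDs and green/red quantum dots, not Sony or QD Vision specifications:

```python
# Rough photon-energy bookkeeping for blue-pumped quantum dot downconversion.
# Wavelengths are assumed typical values, not manufacturer specifications.

BLUE, GREEN, RED = 450.0, 530.0, 630.0  # nm (assumed)

def stokes_efficiency(pump_nm: float, emit_nm: float) -> float:
    """Fraction of photon energy retained on conversion (photon energy ~ 1/wavelength)."""
    return pump_nm / emit_nm

print(f"blue -> green retains ~{stokes_efficiency(BLUE, GREEN):.0%} of photon energy")
print(f"blue -> red   retains ~{stokes_efficiency(BLUE, RED):.0%} of photon energy")
```

Even at perfect quantum yield, each converted photon gives up its Stokes shift (the wavelength difference) as heat inside the optical element, so the roughly two-thirds of blue light that excites the dots is converted at somewhat less than full energy efficiency.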


This diagram is a top-down view of one side of two edge-lit LCDs (the front is “up” in this case, the back is “down”). The upper image shows a traditional “white” LED (blue, with yellow phosphor). The lower image is the method used in Sony’s Triluminos: a blue LED that passes through red and green quantum dots. This RGB light bounces off the light guide, and out through the liquid crystal and other layers just like a regular LCD TV.

(Credit: QD Vision)


If you’re curious about how LCD backlighting works, check out “Is LCD and LED LCD HDTV uniformity a problem?”, which has images and diagrams of how backlights work. Oh, and if the “Triluminos” name sounds familiar, Sony has used it before. This time, as shown, it’s referring to an edge-lighting technology, not the RGB LED backlighting as in 2008.

Sony claims this allows for a wider color gamut – as in, more potential colors – compared to LCD TVs using “white” LEDs. Since all modern TVs are fully capable of reproducing every color in all current HDTV content, this is a bit of marketing hyperbole.

However, the benefits of this could go beyond cool, futuristic tech and WowNeeto-based marketing. When I’ve reviewed LED-lit projectors, I’ve found that the color possible from RGB LEDs looks more realistic than the same Rec. 709-calibrated colors created by color filters (DLP) or dichromatic mirrors (LCD/LCOS) as lit by UHP lamps. One TV engineer I asked about this phenomenon replied “LEDs are like painting with purer paint.”

Our own David Katzmaier often remarks in his reviews on the bluish cast seen on some conventional LED-based TVs compared to, say, plasma sets. “It’s usually most prevalent in dark areas, but I sometimes see a slight bluish ‘coldness’ in brighter material and skin tones too. In some cases I see it despite seemingly excellent color measurements from my instruments.”

So it’s possible that even with the same measured color points, quantum dot-enhanced displays could produce more realistic color. Will they? Will the color mixing required to create Rec. 709 from wildly oversaturated color points cause other issues? What effect will the color filters, which are still necessary on LCDs, have on this “purer” light? These are questions we can’t answer until we see the X900 series, and any future TVs with quantum dots.
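The gamut question can be framed geometrically: oversaturated primaries can reproduce all of Rec. 709 by mixing only if the Rec. 709 triangle fits inside the new primaries’ triangle in CIE xy chromaticity space. A minimal sketch, assuming approximate narrowband (~630/530/465 nm) chromaticities for hypothetical QD primaries:

```python
# Does a hypothetical narrowband (QD-like) primary set enclose Rec. 709
# in CIE xy chromaticity space? QD values below are approximations of
# monochromatic chromaticities, assumed for illustration only.

REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # R, G, B
QD     = [(0.708, 0.292), (0.155, 0.806), (0.136, 0.040)]  # approx. narrowband

def inside_triangle(p, tri):
    """Point-in-triangle test via consistent cross-product signs along the edges."""
    (ax, ay), (bx, by), (cx, cy) = tri
    def cross(ox, oy, x1, y1, x2, y2):
        return (x1 - ox) * (y2 - oy) - (y1 - oy) * (x2 - ox)
    s1 = cross(ax, ay, bx, by, p[0], p[1])
    s2 = cross(bx, by, cx, cy, p[0], p[1])
    s3 = cross(cx, cy, ax, ay, p[0], p[1])
    return (s1 >= 0) == (s2 >= 0) == (s3 >= 0)

# True here means the wider primaries can, in principle, mix down to
# every Rec. 709 color; it says nothing about how well a given TV does it.
print(all(inside_triangle(p, QD) for p in REC709))
```

Containment only answers the "can it" question; the "how well" questions above, about color mixing accuracy and the effect of the remaining color filters, are exactly what measurement of shipping sets would have to settle.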

This whole column and not one “Quantum Leap” joke. Oh dammit.


Atomic Force Microscopy (AFM) image of sparse QDs (white) on a semiconductor background (black). Individual QDs, as well as close-packed small groups can be resolved.

(Credit: MIT)

The current generation of quantum dot technology requires a primary light source, like the blue LEDs in Sony’s Triluminos. This won’t necessarily always be the case: it will be possible to excite the quantum dots directly. This could mean a full QD backlight, but it could be more. How about a direct-emissive display like OLED, but with sub-pixels filled with red, green, or blue quantum dots instead of organic light-emitting diodes? QD Vision calls this a “QLED,” and it could have performance characteristics similar to OLED’s (like a truly infinite contrast ratio). Will it be easier to produce, offer better color, or have even lower power consumption? At this point, we have no idea. Given the production difficulties OLED has had, just the fact that there’s something on the horizon that could offer potentially similar performance is exciting.

Bottom line

Unlike many of the new technologies on display at CES every year, quantum dots are real, and are potentially very cool. For now they reside only in a few high-end LCDs, but like OLED, they could hint at what a display of the future might be. Will they? We shall see.