Deep learning–based data analysis software by ORNL promises to accelerate materials research: AI Applications


AI Generated Image: Neural Networks

Researchers at the Department of Energy’s Oak Ridge National Laboratory have developed a machine-learning inspired software package that provides end-to-end image analysis of electron and scanning probe microscopy images.

Known as AtomAI, the package applies deep learning to microscopy data at atomic resolutions, thereby providing quantifiable physical information such as the precise position and type of each atom in a sample.

Using these methods, researchers can quickly derive statistically meaningful information from immensely complex datasets. These datasets routinely include hundreds of images that each contain thousands of atoms and abnormalities in molecular structure.

This improvement to data analysis allows researchers to engineer atomically precise abnormalities in quantum materials, and it can be used to gain deeper insights into the materials’ physical and chemical qualities.

Electron microscopy and scanning probe microscopy allow materials scientists, physicists and other researchers to probe atomic and molecular structures at extremely high resolutions. These high-resolution methods allow researchers to clearly observe atomic structures, making them an important tool in understanding and engineering materials at the nanoscale.

Electron microscopy is useful for gaining precise information on the structure of a material, whereas scanning probe microscopy is more often used to learn about a material’s functional properties, such as superconductivity or magnetism. Both methods benefit from modern image analysis methods.

Deep learning is a kind of machine learning that allows a program to train itself to accurately identify the contents of an image or block of text. When traditional machine learning is applied to image analysis, relevant features are manually extracted from a set of images — a process known as feature engineering — and used to create a model that categorizes objects based on those features.

In contrast, deep learning models automatically learn relevant features by using a network of layered “neurons” — biologically-inspired nodes through which data and computations flow — trained to detect various aspects of an image at different levels of complexity.

This allows for increased precision and the analysis of more diverse information when compared with traditional machine learning, so long as enough data exists to train the system.
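To make that difference concrete, here is a minimal sketch, in PyTorch, of the kind of layered network described above. Everything in it (names, sizes, layer counts) is illustrative rather than taken from AtomAI: stacked convolutional layers of “neurons” learn their own features and output a per-pixel probability that an atom sits at each location.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny fully convolutional network that maps a grayscale
# micrograph to a per-pixel "atom here?" probability map. Each Conv2d layer is
# a bank of learned feature detectors; no features are hand-engineered.
class TinyAtomFinder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),   # low-level edges and blobs
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # mid-level motifs
            nn.Conv2d(32, 1, kernel_size=1),                         # per-pixel atom logit
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))   # probability map, same height/width as input

model = TinyAtomFinder()
image = torch.randn(1, 1, 256, 256)         # stand-in for a 256 x 256 micrograph
prob_map = model(image)                     # values near 1 mark likely atom centers
print(prob_map.shape)                       # torch.Size([1, 1, 256, 256])
```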

AtomAI, which was developed partially at ORNL’s Center for Nanophase Materials Sciences, includes a unique model architecture to identify thin objects such as nanofibers or domain walls — the interfaces separating magnetic domains — in microscopy data. The software package is also built to reduce errors in image processing by accounting for unintended changes in the image data, such as incoming cosmic rays or images of non-target materials, and by incorporating certain unchanging physical characteristics into the model.

Also, AtomAI includes tools that allow researchers to conduct real-time analysis of the data being gathered and to import information directly into theoretical simulations. These features are useful for gaining insights into the energetics and optical, electronic and magnetic properties of the physical structures being observed.

An overview of the software package was published in Nature Machine Intelligence.

“People used to treat microscopy as a purely qualitative tool,” said Maxim Ziatdinov, a researcher at ORNL and the lead developer of AtomAI, “but recently there was sort of a shift in the mindset of the community: Microscopy can actually be used to extract quantitative physical information.” Qualitative microscopy methods generally rely on researchers noting holistic information about a sample, while the new quantitative methods allow for precise numerical representations of a material’s structure or properties.

Software such as AtomAI has enabled this shift, providing a method to quantitatively analyze the structures of whole samples to find meaningful data that qualitative methods might misrepresent or simply not notice.

Ziatdinov expects his team’s work to accelerate the rate of progress in both fundamental and applied materials research. “If you can characterize things faster and automate at least some parts of the process, then you will also speed up all aspects of materials science research,” he said.

AtomAI was designed with ease of use and access in mind. The entire package can be launched from a browser and requires minimal coding knowledge to operate.
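In practice, getting started looks something like the sketch below, adapted from the documented AtomAI quickstart pattern; exact argument names can vary between versions, and the training arrays here are random stand-ins for real labeled micrographs.

```python
import numpy as np
import atomai as aoi

# Dummy stand-ins for labeled training data: stacks of micrographs and
# pixel-wise atom masks (in practice these come from simulations or hand labels).
images = np.random.rand(32, 256, 256)
labels = (np.random.rand(32, 256, 256) > 0.95).astype(np.float32)
images_test, labels_test = images[:4], labels[:4]

# Documented AtomAI usage pattern (arguments may differ between versions):
model = aoi.models.Segmentor(nb_classes=1)            # deep fully convolutional net
model.fit(images, labels, images_test, labels_test,   # train on the labeled stack
          training_cycles=300, compute_accuracy=True)

# Locate atoms in a new experimental image: returns the raw network output
# and the coordinates of each detected atom.
expdata = np.random.rand(256, 256)
nn_output, coordinates = model.predict(expdata)
```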

“If you’re an experimentalist, then you should be able to use machine learning without knowing all the math behind the process and without necessarily being a good coder,” Ziatdinov said.

AtomAI is already a complete software package, but Ziatdinov and his team expect to expand its capabilities and add more features. He is particularly interested in adding functionality for theoretical researchers who want accurate estimates of structures’ physical characteristics, such as energetics and stability, without going through the time-consuming and expensive process of traditional simulations.

Ziatdinov is also looking forward to hearing directly from other researchers about their needs and ideas for AtomAI and is working with them to integrate new features into the software package.

ORNL is managed by UT-Battelle for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science. – Galen Fader

The Latest Genesis Nanotech Online Is Out! “It’s All About the Mind” – Articles Like “Tuning into brainwave rhythms speeds up learning in adults” (University of Cambridge) and More ….


The brainwaves experiment set-up in the Adaptive Brain Lab, led by Prof Zoe Kourtzi, in the University of Cambridge’s Department of Psychology. Credit: University of Cambridge.

Genesis Nanotech – “Great Things from Small Things”

Scientists have shown for the first time that briefly tuning into a person’s individual brainwave cycle before they perform a learning task dramatically boosts the speed at which cognitive skills improve.

Calibrating rates of information delivery to match the natural tempo of our brains increases our capacity to absorb and adapt to new information, according to the team behind the study.

University of Cambridge researchers say that these techniques could help us retain “neuroplasticity” much later in life and advance lifelong learning.

Read the rest of this Article and More in Genesis Nanotech Online at: Genesis Nanotechnology Online

A fairy-like robot flies by the power of wind and light

Artificial Intelligence (AI) Discovers New Nanostructures – Brookhaven Center for Functional Nanomaterials


Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have successfully demonstrated that autonomous methods can discover new materials.

The artificial intelligence (AI)-driven technique led to the discovery of three new nanostructures, including a first-of-its-kind nanoscale “ladder.” The research was published today in Science Advances.

The newly discovered structures were formed by a process called self-assembly, in which a material’s molecules organize themselves into unique patterns. Scientists at Brookhaven’s Center for Functional Nanomaterials (CFN) are experts at directing the self-assembly process, creating templates for materials to form desirable arrangements for applications in microelectronics, catalysis, and more. Their discovery of the nanoscale ladder and other new structures further widens the scope of self-assembly’s applications.

“Self-assembly can be used as a technique for nanopatterning, which is a driver for advances in microelectronics and computer hardware,” said CFN scientist and co-author Gregory Doerk. “These technologies are always pushing for higher resolution using smaller nanopatterns. You can get really small and tightly controlled features from self-assembling materials, but they do not necessarily obey the kind of rules that we lay out for circuits, for example. By directing self-assembly using a template, we can form patterns that are more useful.”

Staff scientists at CFN, which is a DOE Office of Science User Facility, aim to build a library of self-assembled nanopattern types to broaden their applications. In previous studies, they demonstrated that new types of patterns are made possible by blending two self-assembling materials together.

“The fact that we can now create a ladder structure, which no one has ever dreamed of before, is amazing,” said CFN group leader and co-author Kevin Yager. “Traditional self-assembly can only form relatively simple structures like cylinders, sheets, and spheres. But by blending two materials together and using just the right chemical grating, we’ve found that entirely new structures are possible.”

Blending self-assembling materials together has enabled CFN scientists to uncover unique structures, but it has also created new challenges. With many more parameters to control in the self-assembly process, finding the right combination of parameters to create new and useful structures is a battle against time. To accelerate their research, CFN scientists leveraged a new AI capability: autonomous experimentation.

In collaboration with the Center for Advanced Mathematics for Energy Research Applications (CAMERA) at DOE’s Lawrence Berkeley National Laboratory, Brookhaven scientists at CFN and the National Synchrotron Light Source II (NSLS-II), another DOE Office of Science User Facility at Brookhaven Lab, have been developing an AI framework that can autonomously define and perform all the steps of an experiment. CAMERA’s gpCAM algorithm drives the framework’s autonomous decision-making. The latest research is the team’s first successful demonstration of the algorithm’s ability to discover new materials.

“gpCAM is a flexible algorithm and software for autonomous experimentation,” said Berkeley Lab scientist and co-author Marcus Noack. “It was used particularly ingeniously in this study to autonomously explore different features of the model.”

“With help from our colleagues at Berkeley Lab, we had this software and methodology ready to go, and now we’ve successfully used it to discover new materials,” Yager said. “We’ve now learned enough about autonomous science that we can take a materials problem and convert it into an autonomous problem pretty easily.”

To accelerate materials discovery using their new algorithm, the team first developed a complex sample with a spectrum of properties for analysis. Researchers fabricated the sample using the CFN nanofabrication facility and carried out the self-assembly in the CFN material synthesis facility.

“An old school way of doing material science is to synthesize a sample, measure it, learn from it, and then go back and make a different sample and keep iterating that process,” Yager said. “Instead, we made a sample that has a gradient of every parameter we’re interested in. That single sample is thus a vast collection of many distinct material structures.”

Then, the team brought the sample to NSLS-II, which generates ultrabright X-rays for studying the structure of materials. CFN operates three experimental stations in partnership with NSLS-II, one of which, the Soft Matter Interfaces (SMI) beamline, was used in this study.

“One of the SMI beamline’s strengths is its ability to focus the X-ray beam on the sample down to microns,” said NSLS-II scientist and co-author Masa Fukuto. “By analyzing how these microbeam X-rays get scattered by the material, we learn about the material’s local structure at the illuminated spot. Measurements at many different spots can then reveal how the local structure varies across the gradient sample. In this work, we let the AI algorithm pick, on the fly, which spot to measure next to maximize the value of each measurement.”
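Conceptually, that on-the-fly spot selection can be pictured as the loop below: a surrogate model (here a Gaussian process, via scikit-learn) is refit after every measurement, and the next spot is wherever the model is most uncertain. This is a generic illustration, not the actual gpCAM interface, and measure_spot is a stand-in for the SMI microbeam measurement.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def measure_spot(x):
    """Stand-in for one SMI microbeam measurement at position x (illustrative)."""
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0, 1, 200).reshape(-1, 1)  # possible spots along the gradient sample
X, y = [[0.5]], [measure_spot(0.5)]                 # seed with a single measurement

gp = GaussianProcessRegressor()
for _ in range(20):                                 # autonomous loop: refit, choose, measure
    gp.fit(np.array(X), np.array(y))                # update the surrogate model of the sample
    mean, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]             # most informative spot = highest uncertainty
    X.append([x_next[0]])
    y.append(measure_spot(x_next[0]))
```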

As the sample was measured at the SMI beamline, the algorithm, without human intervention, created a model of the material’s many and diverse structures. The model updated itself with each subsequent X-ray measurement, making every measurement more insightful and accurate.

In a matter of hours, the algorithm had identified three key areas in the complex sample for the CFN researchers to study more closely. They used the CFN electron microscopy facility to image those key areas in exquisite detail, uncovering the rails and rungs of a nanoscale ladder, among other novel features.

From start to finish, the experiment ran about six hours. The researchers estimate they would have needed about a month to make this discovery using traditional methods.


“Autonomous methods can tremendously accelerate discovery,” Yager said. “It’s essentially ‘tightening’ the usual discovery loop of science, so that we cycle between hypotheses and measurements more quickly. Beyond just speed, however, autonomous methods increase the scope of what we can study, meaning we can tackle more challenging science problems.”

“Moving forward, we want to investigate the complex interplay among multiple parameters. We conducted simulations using the CFN computer cluster that verified our experimental results, but they also suggested how other parameters, such as film thickness, can also play an important role,” Doerk said.

The team is actively applying their autonomous research method to even more challenging material discovery problems in self-assembly, as well as other classes of materials. Autonomous discovery methods are adaptable and can be applied to nearly any research problem.

“We are now deploying these methods to the broad community of users who come to CFN and NSLS-II to conduct experiments,” Yager said. “Anyone can work with us to accelerate the exploration of their materials research. We foresee this empowering a host of new discoveries in the coming years, including in national priority areas like clean energy and microelectronics.”

Source Nano Mag

DNA Nanotechnology Tools: From Design to Applications: Current Opportunities and Collaborations – Wyss Institute – Harvard University


Suite of DNA nanotechnology devices engineered to overcome specific bottlenecks in the development of new therapies, diagnostics, and understanding of molecular structures

Lead Inventors

  • William Shih
  • Wesley Wong

Advantages

  • DNA as building blocks
  • Broad applications
  • Low cost with big potential

DNA nanostructures with their potential for cell and tissue permeability, biocompatibility, and high programmability at the nanoscale level are promising candidates as new types of drug delivery vehicles, highly specific diagnostic devices, and tools to decipher how biomolecules dynamically change their shapes, and interact with each other and with candidate drugs. Wyss Institute researchers are providing a suite of diverse, multifunctional DNA nanotechnological tools with unique capabilities and potential for a broad range of clinical and biomedical research areas.

DNA nanotechnological devices for therapeutic drug delivery

DNA nanostructures have future potential to be widely used to transport and present a variety of biologically active molecules such as drugs and immune-enhancing antigens and adjuvants to target cells and tissues in the human body.

DNA origami as high-precision delivery components of cancer vaccines


The Wyss Institute has developed cancer vaccines to improve immunotherapies. These approaches use implantable or injectable biomaterial-based scaffolds that present tumor-specific antigens along with biomolecules that attract dendritic immune cells (DCs) into the scaffold and activate them so that, after their release, they can orchestrate anti-tumor T cell responses against tumors carrying the same antigens. To be activated most effectively, DCs likely need to encounter tumor antigens and immune-boosting CpG adjuvant molecules at particular ratios (stoichiometries) and in configurations that register with the density and distribution of receptor molecules on their cell surface.

Specifically developed DNA origami, programmed to assemble into rigid square-lattice blocks that co-present tumor antigens and adjuvants to DCs within biomaterial scaffolds with nanoscale precision, have the potential to boost the efficacy of therapeutic cancer vaccines and can be further functionalized with anti-cancer drugs.

Chemical modification strategy to protect drug-delivering DNA nanostructures


DNA nanostructures such as self-assembling DNA origami are promising vehicles for the delivery of drugs and diagnostics. They can be flexibly functionalized with small molecule and protein drugs, as well as features that facilitate their delivery to specific target cells and tissues. However, their potential is hampered by their limited stability in the body’s tissues and blood. To help fulfill the extraordinary promise of DNA nanostructures, Wyss researchers developed an easy, effective and scalable chemical cross-linking approach that can provide DNA nanostructures with the stability they need as effective vehicles for drugs and diagnostics.

In two simple, cost-effective steps, the Wyss’ approach first uses a small, unobtrusive neutralizing agent, PEG-oligolysine, which carries multiple positive charges, to cover DNA origami structures. In contrast to commonly used Mg2+ ions, which each neutralize only two negative charges in DNA structures, PEG-oligolysine covers multiple negative charges at once, forming a stable “electrostatic net” that increases the stability of DNA nanostructures about 400-fold. Then, by applying a chemical cross-linking reagent known as glutaraldehyde, additional stabilizing bonds are introduced into the electrostatic net, increasing the stability of DNA nanostructures by another 250-fold and extending their half-life into a range compatible with a broad range of clinical applications.

DNA nanotechnological devices as ultrasensitive diagnostic and analytical tools

The generation of detectable DNA nanostructures in response to a disease or pathogen-specific nucleic acids, in principle, offers a means for highly effective biomarker detection in diverse samples. A single molecule binding event of a synthetic oligonucleotide to a target nucleic acid can nucleate the creation of much larger structures by the cooperative assembly of smaller synthetic DNA units like DNA tiles or bricks into larger structures that then can be visualized in simple laboratory assays. However, a central obstacle to these approaches is the occurrence of (1) non-specific binding and (2) non-specific nucleation events in the absence of a specific target nucleic acid which can lead to false-positive results. Wyss DNA nanotechnologists have developed two separately applicable but combinable solutions for these problems.

Digital counting of biomarker molecules with DNA nanoswitch catenanes


To enable the initial detection (binding) of biomarkers with ultra-high sensitivity and specificity, Wyss researchers have developed a type of DNA nanoswitch that, designed as a larger catenane (Latin catena, meaning chain), is assembled from mechanically interlocked ring-shaped substructures with specific functionalities that together enable the detection and counting of single biomarker molecules. In the “DNA Nanoswitch Catenane” structure, both ends of a longer synthetic DNA strand are linked to two antibody fragments that each specifically bind different parts of the same biomarker molecule of interest, thus allowing for high target specificity and sensitivity.

This bridging event causes the strand to close into a “host ring,” which is interlocked at different regions with different “guest rings.” Closing of the host ring switches the guest rings into a configuration that allows the synthesis of a new DNA strand. The newly synthesized diagnostic strand can then be unambiguously detected as a single digital molecule count, while disrupting the antibody fragment/biomarker complex starts a new biomarker counting cycle. Both the target-binding specificity and the synthesis of a target-specific DNA strand enable the combination of multiple DNA nanoswitch catenanes to simultaneously count different biomarker molecules in a single multiplexed reaction.

“For ultrasensitive diagnostics, it is desirable to have the fastest amplification and the lowest rate of spurious nucleation. DNA nanotechnology approaches have the potential to deliver this in an enzyme-free, low-cost manner.”

– William Shih

A rapid amplification platform for diverse biomarkers


A rapid, low-cost and enzyme-free detection and amplification platform avoids non-specific nucleation and amplification and allows the self-assembly of much larger micron-scale structures from a single seed in just minutes. The method, called “Crisscross Nanoseed Detection,” enables the ultra-cooperative assembly of ribbons starting from a single biomarker binding event. The micron-scale structures are densely woven from single-stranded “DNA slats,” whereby an inbound slat snakes over and under six or more previously captured slats on a growing ribbon end in a “crisscross” manner, forming weak but highly specific interactions with its interacting DNA slats. The nucleation of the assembly process is strictly target-seed specific, and the assembly can be carried out in a one-step reaction in about 15 minutes, without the addition of further reagents and over a broad range of temperatures. Using standard laboratory equipment, the assembled structures can then be rapidly visualized or otherwise detected, for example, using high-throughput fluorescence plate reader assays.


CURRENT OPPORTUNITY – STARTUP

Crisscross Nanoseed Detection: Nanotechnology-Powered Infectious Disease Diagnostics

Enzyme-free DNA nanotechnology for rapid, ultrasensitive, and low-cost detection of infectious disease biomarkers with broad accessibility in point-of-care settings.

The DNA assembly process in the Crisscross Nanoseed Detection method can also be linked to the action of DNA nanoswitch catenanes that highly specifically detect a biomarker molecule leading to preservation of a molecular record. Each surviving record can nucleate the assembly of a crisscross nanostructure, combining high-specificity binding with amplification for biomarker detection.

Wyss researchers are currently developing the approach as a multiplexable, low-cost diagnostic for the COVID-19-causing SARS-CoV-2 virus and other pathogens that could give accurate results faster and at lower cost than currently used techniques.

Nanoscale devices for determining the structure and identity of proteins at the single-molecule level

The ability to identify and quantify proteins from trace biological samples would have a profound impact on both basic research and clinical practice, from monitoring changes in protein expression within individual cells, to enabling the discovery of new biomarkers of disease. Furthermore, the ability to also determine their structures and interactions would open up new avenues for drug discovery and characterization. Over the past decades, developments in DNA analysis and sequencing have unquestionably revolutionized medicine – yet equivalent developments for protein analysis have remained a challenge. While methods such as mass spectrometry for protein identification, and cryoEM for structure determination have rapidly advanced, challenges remain regarding resolution and the ability to work with trace heterogeneous samples.

To help meet this challenge, researchers at the Wyss Institute have developed a new approach that combines DNA nanotechnology with single-molecule manipulation to enable the structural identification and analysis of proteins and other macromolecules. “DNA Nanoswitch Calipers” (DNCs) offer a high-resolution approach to “fingerprinting” proteins by measuring distances and determining geometries within single proteins in solution. DNCs are nanodevices designed to measure distances between DNA handles that have been attached to target molecules of interest. DNC states can be actuated and read out using single-molecule force spectroscopy, enabling multiple absolute distance measurements to be made on each single molecule.

DNCs could be widely adapted to advance research in different areas, including structural biology, proteomics, diagnostics and drug discovery.

All technologies are in development and available for industry collaborations.

WARNING! AI-armed cyberattacks may become lethal in just 5 years from now


A recent cyber analytical report has warned that artificial intelligence (AI)-enabled cyberattacks, which have been quite limited until now, may become more aggressive in the coming years.

The Helsinki-based cybersecurity and privacy firm WithSecure, the Finnish Transport and Communications Agency, and the Finnish National Emergency Supply Agency collaborated on the report, according to an article by Cybernews on Thursday.


“Although AI-generated content has been used for social engineering purposes, AI techniques designed to direct campaigns, perform attack steps, or control malware logic have still not been observed in the wild,” said Andy Patel, WithSecure intelligence researcher.

Such “techniques will be first developed by well-resourced, highly-skilled adversaries, such as nation-state groups.” 

AI-armed cyberattacks may get lethal in next 5 years, warns report

The paper examined current trends and advancements in AI, cyberattacks, and areas where the two intersect, suggesting early adoption and evolution of preventative measures were key to overcoming the threats. 

“After new AI techniques are developed by sophisticated adversaries, some will likely trickle down to less-skilled adversaries and become more prevalent in the threat landscape,” stated Patel. 

The threat in the next five years

The authors claim it is safe to assert that AI-based attacks are currently extremely uncommon and mostly used for social engineering purposes. However, they may also be employed in ways that analysts and researchers cannot directly observe.

The majority of current AI disciplines do not come near human intellect and cannot autonomously plan or carry out cyberattacks.

However, attackers will likely create AI in the next five years that can autonomously identify vulnerabilities, plan and carry out attack campaigns, use stealth to avoid defenses, and gather or mine data from infected systems or open-source intelligence.

“AI-enabled attacks can be run faster, target more victims and find more attack vectors than conventional attacks because of the nature of intelligent automation and the fact that they replace typically manual tasks,” said the report.

New methods are required to combat AI-based hacking that makes use of synthetic information, spoofs biometric authentication systems, and other upcoming capabilities, according to the paper. 

AI-powered deepfakes 

AI-powered assaults will definitely excel at impersonation, a tactic utilized frequently in phishing and vishing (voice phishing) cyberattacks, noted the report. 

“Deepfake-based impersonation is an example of new capability brought by AI for social engineering attacks,” claimed the report’s authors, who forecast that impersonations made possible by AI will advance further.

“No prior technology enabled to convincingly mimic the voice, gestures, and image of a target human in a manner that would deceive victims.” 

Many tech experts believe that deepfakes are the biggest cybersecurity concern. 

They have a strong case: everything from phone locks to bank accounts and passports, along with many recent technical developments, has migrated toward biometric technologies.

Given how quickly deepfakes are developing, security systems that primarily rely on such technology appear to be at higher risk.

There were 1,291 data breaches through September 2021, according to the Identity Theft Resource Center’s (ITRC) study of data breaches.

Compared with the 1,108 breaches recorded in all of 2020, this figure represents a 17 percent increase.

ITRC research also found 281 million victims of data compromise during the first nine months of 2021, a sharp increase.

MIT: Biotech labs are using AI Inspired by DALL-E to Invent New Drugs


The explosion in AI models like OpenAI’s DALL-E 2—programs trained to generate pictures of almost anything you ask for—has sent ripples through the creative industries, from fashion to filmmaking, by providing weird and wonderful images on demand.

The same technology behind these programs is also making a splash in biotech labs, which have started using this type of generative AI, known as a diffusion model, to conjure up designs for new types of protein never seen in nature.


Today, two labs separately announced programs that use diffusion models to generate designs for novel proteins with more precision than ever before. Generate Biomedicines, a Boston-based startup, revealed a program called Chroma, which the company describes as the “DALL-E 2 of biology.”

At the same time, a team at the University of Washington led by biologist David Baker has built a similar program called RoseTTAFold Diffusion. In a preprint paper posted online today, Baker and his colleagues show that their model can generate precise designs for novel proteins that can then be brought to life in the lab. “We’re generating proteins with really no similarity to existing ones,” says Brian Trippe, one of the co-developers of RoseTTAFold.

These protein generators can be directed to produce designs for proteins with specific properties, such as shape or size or function. In effect, this makes it possible to come up with new proteins to do particular jobs on demand. Researchers hope that this will eventually lead to the development of new and more effective drugs. “We can discover in minutes what took evolution millions of years,” says Gevorg Grigoryan, CTO of Generate Biomedicines.

“What is notable about this work is the generation of proteins according to desired constraints,” says Ava Amini, a biophysicist at Microsoft Research in Cambridge, Massachusetts. 

Symmetrical protein structures generated by Chroma

Proteins are the fundamental building blocks of living systems. In animals, they digest food, contract muscles, detect light, drive the immune system, and so much more. When people get sick, proteins play a part. 

Proteins are thus prime targets for drugs. And many of today’s newest drugs are protein based themselves. “Nature uses proteins for essentially everything,” says Grigoryan. “The promise that offers for therapeutic interventions is really immense.”

But drug designers currently have to draw on an ingredient list made up of natural proteins. The goal of protein generation is to extend that list with a nearly infinite pool of computer-designed ones.

Computational techniques for designing proteins are not new. But previous approaches have been slow and not great at designing large proteins or protein complexes—molecular machines made up of multiple proteins coupled together. And such proteins are often crucial for treating diseases.  

A protein structure generated by RoseTTAFold Diffusion (left) and the same structure created in the lab (right)

The two programs announced today are also not the first use of diffusion models for protein generation. A handful of studies in the last few months from Amini and others have shown that diffusion models are a promising technique, but these were proof-of-concept prototypes. Chroma and RoseTTAFold Diffusion build on this work and are the first full-fledged programs that can produce precise designs for a wide variety of proteins.

Namrata Anand, who co-developed one of the first diffusion models for protein generation in May 2022, thinks the big significance of Chroma and RoseTTAFold Diffusion is that they have taken the technique and supersized it, training on more data and more computers. “It may be fair to say that this is more like DALL-E because of how they’ve scaled things up,” she says.

Diffusion models are neural networks trained to remove “noise”—random perturbations added to data—from their input. Given a random mess of pixels, a diffusion model will try to turn it into a recognizable image.

In Chroma, noise is added by unraveling the amino acid chains that a protein is made from. Given a random clump of these chains, Chroma tries to put them together to form a protein. Guided by specified constraints on what the result should look like, Chroma can generate novel proteins with specific properties.

Baker’s team takes a different approach, though the end results are similar. Its diffusion model starts with an even more scrambled structure. Another key difference is that RoseTTAFold Diffusion uses information about how the pieces of a protein fit together provided by a separate neural network trained to predict protein structure (as DeepMind’s AlphaFold does). This guides the overall generative process. 
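Stripped to its essentials, training any diffusion model follows the recipe described above: corrupt a clean sample with noise, ask the network to predict that noise, and penalize the difference. The PyTorch sketch below is purely illustrative; the real Chroma and RoseTTAFold Diffusion models operate on protein structures and are vastly larger.

```python
import torch
import torch.nn as nn

# Toy denoiser: in real systems this is a large network over protein coordinates.
model = nn.Sequential(nn.Linear(65, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(clean_batch):
    """One denoising-diffusion step: corrupt, predict the corruption, penalize errors."""
    t = torch.rand(clean_batch.shape[0], 1)                   # random noise level per sample
    noise = torch.randn_like(clean_batch)                     # what the net must recover
    noisy = torch.sqrt(1 - t) * clean_batch + torch.sqrt(t) * noise
    predicted = model(torch.cat([noisy, t], dim=1))           # condition on the noise level
    loss = ((predicted - noise) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

batch = torch.randn(16, 64)          # stand-in for a batch of flattened training samples
print(training_step(batch))
```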

Generate Biomedicines and Baker’s team both show off an impressive array of results. They are able to generate proteins with multiple degrees of symmetry, including proteins that are circular, triangular, or hexagonal. To illustrate the versatility of their program, Generate Biomedicines generated proteins shaped like the 26 letters of the Latin alphabet and the numerals 0 to 10. Both teams can also generate pieces of proteins, matching new parts to existing structures.


Most of these demonstrated structures would serve no purpose in practice. But because a protein’s function is determined by its shape, being able to generate different structures on demand is crucial.

Generating strange designs on a computer is one thing. But the goal is to turn these designs into real proteins. To test whether Chroma produced designs that could be made, Generate Biomedicines took the sequences for some of its designs—the amino acid strings that make up the protein—and ran them through another AI program. They found that 55% of them were predicted to fold into the structure generated by Chroma, which suggests that these are designs for viable proteins.
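That validation is essentially a self-consistency check, sketched below with hypothetical placeholders (predict_structure standing in for a structure-prediction network, backbone_rmsd for a structural comparison; the article does not name the exact tools or threshold used):

```python
import random

# Hypothetical stand-ins: a real pipeline would call a structure-prediction
# network and a structural-alignment metric here.
def predict_structure(sequence):
    return sequence                      # placeholder "predicted structure"

def backbone_rmsd(predicted, designed):
    return random.uniform(0.5, 5.0)      # placeholder deviation in angstroms

def fraction_viable(designs, rmsd_threshold=2.0):
    """Fraction of designed sequences predicted to refold into their designed shape."""
    viable = sum(
        1 for sequence, structure in designs
        if backbone_rmsd(predict_structure(sequence), structure) < rmsd_threshold
    )
    return viable / len(designs)

print(fraction_viable([("MKV...", None)] * 100))   # Chroma's reported figure was 55%
```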

Baker’s team ran a similar test. But Baker and his colleagues have gone a lot further than Generate Biomedicines in evaluating their model. They have created some of RoseTTAFold Diffusion’s designs in their lab. (Generate Biomedicines says that it is also doing lab tests but is not yet ready to share results.) “This is more than just proof of concept,” says Trippe. “We’re actually using this to make really great proteins.”

IAN C HAYDON / UW INSTITUTE FOR PROTEIN DESIGN

For Baker, the headline result is the generation of a new protein that attaches to the parathyroid hormone, which controls calcium levels in the blood. “We basically gave the model the hormone and nothing else and told it to make a protein that binds to it,” he says. When they tested the novel protein in the lab, they found that it attached to the hormone more tightly than anything that could have been generated using other computational methods—and more tightly than existing drugs. “It came up with this protein design out of thin air,” says Baker. 

Grigoryan acknowledges that inventing new proteins is just the first step of many. “We’re a drug company,” he says. “At the end of the day what matters is whether we can make medicines that work or not.” Protein-based drugs need to be manufactured in large numbers, then tested in the lab and finally in humans. This can take years. But he thinks that his company and others will find ways to speed those steps up as well.

“The rate of scientific progress comes in fits and starts,” says Baker. “But right now we’re in the middle of what can only be called a technological revolution.”

UCF Researchers Develop Device That Mimics Brain Cells Used for Human Vision – AI for Autonomous Rescue Drones? Robotics?


The UCF-developed device is an important step in the fields of AI and robotics.

University of Central Florida researchers are helping to close the gap separating human and machine minds.

In a study featured as the cover article appearing today in the journal Science Advances, a UCF research team showed that by combining two promising nanomaterials into a new superstructure, they could create a nanoscale device that mimics the neural pathways of brain cells used for human vision.

“This is a baby step toward developing neuromorphic computers, which are computer processors that can simultaneously process and memorize information,” said Jayan Thomas, an associate professor in UCF’s NanoScience Technology Center and Department of Materials Science and Engineering. “This can reduce the processing time as well as the energy required for processing. At some time in the future, this invention may help to make robots that can think like humans.”

Thomas led the research in collaboration with Tania Roy, an assistant professor in UCF’s NanoScience Technology Center, and others at UCF’s NanoScience Technology Center and the Department of Materials Science and Engineering.

Roy said a potential use for the technology is for drone-assisted rescues.

“Imagine a drone that can fly without guidance to remote mountain sites and locate stranded mountaineers,” Roy said. “Today it is difficult since these drones need connectivity to remote servers to identify what they scan with their camera eye. Our device makes this drone truly autonomous because it can see just like a human.”

“Earlier research created a camera which captured the image and sent it to a server to be recognized, but our group created a single device that mimics the eye and the brain function together,” she said. “Our device can observe the image and recognize it on the spot.”

The trick to the innovation was growing nanoscale, light-sensitive perovskite quantum dots on the two-dimensional, atomically thin nanomaterial graphene. This combination allows the photoactive particles to capture light, convert it to electric charges and then transfer the charges directly to the graphene, all in one step. The entire process takes place on an extremely thin film, about one ten-thousandth of the thickness of a human hair.

Basudev Pradhan, who was a Bhaskara Advanced Solar Energy fellow in Thomas’ lab and is currently an assistant professor in the Department of Energy Engineering at the Central University of Jharkhand in India, and Sonali Das, a postdoctoral fellow in Roy’s lab, are shared first authors of the study.

“Because of the nature of the superstructure, it shows a light-assisted memory effect,” Pradhan said. “This is similar to humans’ vision-related brain cells. The optoelectronic synapses we developed are highly relevant for brain-inspired, neuromorphic computing. This kind of superstructure will definitely lead to new directions in development of ultrathin optoelectronic devices.”

Das said there are also potential defense applications.

“Such features can also be used for aiding the vision of soldiers on the battlefield,” she said. “Further, our device can sense, detect and reconstruct an image along with extremely low power consumption, which makes it capable for long-term deployment in field applications.”

Neuromorphic computing is a long-standing goal of scientists in which computers can simultaneously process and store information, like the human brain does, for example, to allow vision. Currently, computers store and process information in separate places, which ultimately limits their performance.

To test their device’s ability to see objects through neuromorphic computing, the researchers used it in facial recognition experiments, Thomas said.

“The facial recognition experiment was a preliminary test to check our optoelectronic neuromorphic computing,” Thomas said. “Since our device mimics vision-related brain cells, facial recognition is one of the most important tests for our neuromorphic building block.”

They found that their device was able to successfully recognize the portraits of four different people.

The researchers said they plan to continue their collaboration to refine the device, including using it to develop a circuit-level system.

Study co-authors were Jinxin Li, Farzana Chowdhury, Jayesh Cherusseri, Deepak Pandey, Durjoy Dev, Adithi Krishnaprasad, Elizabeth Barrios, Andrew Towers, Andre Gesquiere, and Laurene Tetard.

Thomas joined UCF in 2011 and is a part of the NanoScience Technology Center with a joint appointment in the College of Optics and Photonics and the Department of Materials Science and Engineering in the College of Engineering. Previously, Thomas was at the University of Arizona in its College of Optical Sciences. He has several degrees including a doctorate in chemistry/materials science from Cochin University of Science and Technology in India.

Roy joined UCF in 2016 and is a part of the NanoScience Technology Center with a joint appointment in the Department of Materials Science and Engineering, the Department of Electrical and Computer Engineering and the Department of Physics. Her recent National Science Foundation CAREER award focuses on the development of devices for artificial intelligence applications. Roy was a postdoctoral scholar at the University of California, Berkeley prior to joining UCF. She received her doctorate in electrical engineering from Vanderbilt University.

Innovative AI Breath Analyzer Diagnoses Diseases by “Smell” – AI System to Detect 17 Diseases from Exhaled Breath with 86% Accuracy



Re-Posted from Psychology Today: Author Cami Russo

Imagine being able to know if you have Parkinson’s disease, multiple sclerosis, liver failure, Crohn’s disease, pulmonary hypertension, chronic kidney disease, or any number of cancers based on a simple, non-invasive test of your breath. Breath analyzers to detect alcohol have been around for well over half a century—why not apply the same concept to detect diseases? A global team of scientists from universities in Israel, France, Latvia, China and the United States has developed an artificial intelligence (AI) system to detect 17 diseases from exhaled breath with 86 percent accuracy.

The research team, led by Professor Hossam Haick of the Technion-Israel Institute of Technology, collected breath samples from 1,404 subjects with either no disease (healthy controls) or one of 17 different diseases. The disease conditions include lung cancer, colorectal cancer, head and neck cancer, ovarian cancer, bladder cancer, prostate cancer, kidney cancer, gastric cancer, Crohn’s disease, ulcerative colitis, irritable bowel syndrome, idiopathic Parkinson’s, atypical Parkinsonism, multiple sclerosis, pulmonary hypertension, pre-eclampsia toxemia, and chronic kidney disease.

The concept is relatively simple: identify the breathprints of diseases and compare them to a person’s exhalation. What makes it complicated is the execution. For example, how do you identify the breathprint of a disease? Is it unique like a fingerprint? Answering these questions requires a deeper look at the molecular composition of breath.

When we exhale, nitrogen, oxygen, carbon dioxide, argon, and water vapor are released. Human breath also contains volatile organic compounds (VOCs), organic chemicals that are emitted as gases and have a high vapor pressure at normal temperature. The American biochemist Linus Pauling, one of the founders of modern quantum chemistry and molecular biology and recipient of the 1954 Nobel Prize in Chemistry and the 1962 Nobel Peace Prize, studied 250 human breath volatiles using gas-liquid chromatography in 1971. Pauling is widely regarded as a pioneer of modern breath analysis. Exhaled breath contains over 3,500 components, mostly VOCs in small quantities, according to a 2011 study published in “Annals of Allergy, Asthma & Immunology.”

VOCs are the common factor in the smelling process for both breath analyzers and humans. When we inhale, the nose draws in odor molecules that typically contain volatile (easy-to-evaporate) chemicals. Once the odor molecules contact the olfactory epithelium tissue that lines the nasal cavity, they bind with olfactory receptors, which send an electrical impulse to a spherical structure called the glomerulus in the olfactory bulb of the brain.

There are approximately 2,000 glomeruli near the surface of the olfactory bulb. Smell is the brain’s interpretation of the odorant patterns released from the glomerulus. The human nose can detect a trillion smells. In Haick’s research team, nanotechnology and machine learning replace the biological brain in the smelling process.

Haick’s team of scientists developed a system, aptly called “NaNose,” that uses nanotechnology-based sensors trained to detect volatile organic compounds associated with select diseases in the study. NaNose has two layers. One is an inorganic nanolayer with nanotubes and gold nanoparticles for electrical conductivity. The other is an organic sensing layer with carbon that controls the electrical resistance of the inorganic layer based on the incoming VOCs. The electrical resistance changes depending on the VOCs.

Artificial intelligence (AI) is used to analyze the data. Specifically, deep learning is used to identify patterns in the data in order to match incoming signals with the chemical signature of specific diseases. The AI system was then trained on more than 8,000 patients in clinics with promising results—the system detected gastric cancer with 92-94 percent accuracy in a blinded test. The researchers discovered that “each disease has its own unique breathprint.”
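The pattern-matching step can be pictured with the small scikit-learn sketch below, which is purely illustrative: the sensor count and model are assumptions, and the random stand-in data will score near chance, whereas the team’s real breathprints yielded 86 percent accuracy. Each breath sample becomes a vector of sensor-resistance changes, and a classifier learns to map those vectors to disease labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors = 1404, 20              # 1,404 subjects as in the study; sensor count assumed
X = rng.normal(size=(n_samples, n_sensors))  # random stand-in for resistance-change breathprints
y = rng.integers(0, 18, size=n_samples)      # 17 diseases plus healthy control = 18 classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")  # near chance here; 0.86 with real breathprints
```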

Efforts are underway to miniaturize and commercialize the innovative technology developed by Haick’s team in a project called “SniffPhone.”  In November 2018, the European Commission’s Horizon 2020 awarded the SniffPhone the “2018 Innovation Award” for the “Most Innovative Project.”

The market opportunity for medical breath analyzers is expected to grow. By 2024, the breath analyzer market is projected to reach USD 11.3 billion globally, according to figures published in June 2018 by Grand View Research—alcohol detection holds the majority of the revenue share. Currently, breath analyzers are used to detect alcohol and drugs and to diagnose asthma and gastroenteric conditions. Clinical applications are projected to increase due to the introduction of “advanced technologies to detect nitric oxide and carbon monoxide in breath,” Grand View Research states. According to the study, the medical application segment is expected to grow thanks to the ability of breath analyzers to detect volatile organic compounds (VOCs) that may help in “early diagnosis of conditions including cardiopulmonary diseases and lung and breast cancer” and act as “biomarkers to assess disease progressions.”

By applying cross-disciplinary innovative technologies from the fields of artificial intelligence, nanotechnology, and molecular chemistry, diagnosing a wide variety of diseases may be as simple and non-invasive as a breath analysis using a handheld device in the not-so-distant future.

 

New stable, transparent, and flexible electronic device that emulates essential synaptic behaviors, with potential for #AI in organic environments.


Waterproof artificial synapses for pattern recognition in organic environments

Structure and materials of the transparent and flexible synapses. a) Illustration of the identical bio-synapse and artificial synapse structures. The two electrodes and the functional layer correspond to the pre-synapse, post-synapse, and synaptic cleft, respectively. b) Schematic of the ITO/PEDOT:PSS/ITO flexible and transparent artificial synaptic device. c) Top and d) cross-sectional SEM images of the PEDOT:PSS film on the Si substrate. The film thickness was 42.18 nm. e) Schematic structure and f) Raman spectra of PEDOT:PSS. g) Transmittance spectrum of the PET/ITO, PET/ITO/PEDOT:PSS, and PET/ITO/PEDOT:PSS/ITO structures. h) AFM image (2×2 μm²) of the PEDOT:PSS film on the PET/ITO substrate. Root-mean-square average roughness (Rq) was 1.99 nm. Credit: Wang et al.

Most artificial intelligence (AI) systems try to replicate biological mechanisms and behaviors observed in nature. One key example of this is electronic synapses (e-synapses), which try to reproduce junctions between nerve cells that enable the transmission of electrical or chemical signals to target cells in the human body, known as synapses.

Over the past few years, researchers have simulated versatile functions using single physical devices. These devices could soon enable advanced learning and memory capabilities in machines, emulating functions of the human brain. 

Recent studies have proposed flexible, transparent and even bio-compatible electronic devices for pattern recognition, which could pave the way toward a new generation of wearable and implantable synaptic systems. These “invisible” e-synapses, however, come with a notable disadvantage: they easily dissolve in water or in organic solutions, which is far from ideal for wearable applications.

To overcome this limitation, researchers at Fudan University in Shanghai have set out to develop a new stable, flexible and waterproof synapse suitable for applications in organic environments. Their study, outlined in a paper published in the Royal Society of Chemistry’s Nanoscale Horizons journal, presents a new fully transparent electronic device that emulates essential synaptic behaviors, such as paired-pulse facilitation (PPF), long-term potentiation/depression (LTP/LTD) and learning-forgetting-relearning processes.

“In the present work, a stable waterproof artificial synapse based on a fully transparent electronic device, suitable for wearable applications in an organic environment, is for the first time demonstrated,” the researchers wrote in their paper.

The flexible, fully transparent and waterproof device developed by the researchers has so far achieved remarkable results, with an optical transmittance of ~87.5 percent in the visible light range. It was also able to reliably replicate LTP/LTD processes in bent states. LTP/LTD are two processes affecting synaptic plasticity, which respectively entail an enhancement and a decrease in synaptic strength.
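Behaviors like these are often summarized with a phenomenological pulse-response model, sketched below. This is an illustrative textbook-style model, not the Fudan team’s equations: each potentiating pulse nudges the conductance (the synaptic “weight”) up with diminishing returns, and each depressing pulse nudges it down.

```python
import numpy as np

def pulse_train(g0=0.1, n_pulses=50, alpha=0.05, potentiate=True):
    """Illustrative nonlinear update: conductance saturates toward 1 (LTP) or decays toward 0 (LTD)."""
    g = [g0]
    for _ in range(n_pulses):
        if potentiate:
            g.append(g[-1] + alpha * (1.0 - g[-1]))  # LTP: smaller gains near saturation
        else:
            g.append(g[-1] - alpha * g[-1])          # LTD: smaller drops near the floor
    return np.array(g)

ltp = pulse_train(potentiate=True)                   # conductance climbs pulse by pulse
ltd = pulse_train(g0=ltp[-1], potentiate=False)      # then steps back down under depressing pulses
```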

The researchers tested their synapses by immersing them in water and in five common organic solvents for over 12 hours. The devices then functioned through 6,000 spikes without noticeable degradation. The researchers also used their e-synapses to develop a device-to-system-level simulation framework, which achieved a handwritten digit recognition accuracy of 92.4 percent.

“The device demonstrated an excellent transparency of 87.5 percent at 550nm wavelength and flexibility at a radius of 5mm,” the researchers wrote in their paper. “Typical synaptic plasticity characteristics, including EPSC/IPSC, PPF and learning-forgetting-relearning processes, were emulated. Furthermore, the e-synapse exhibited reliable LTP/LTD behaviors at flat and bended states, even after being immersed in water and organic solvents for over 12 hours.” 

The device proposed by this team of researchers is the first “invisible” and waterproof e-synapse that can reliably operate in organic environments without damage or deterioration. In the future, it could aid the development of new reliable brain-inspired neuromorphic systems, including wearable and implantable devices.

More information: Tian-Yu Wang et al. Fully transparent, flexible and waterproof synapses with pattern recognition in organic environments, Nanoscale Horizons (2019). DOI: 10.1039/C9NH00341J

© 2019 Science X Network

No cloud required: Why AI’s future is at the edge


For all the promise and peril of artificial intelligence, there’s one big obstacle to its seemingly relentless march:

The algorithms for running AI applications have been so big and complex that they’ve required processing on powerful machines in the cloud and data centers, making a wide swath of applications less useful on smartphones and other “edge” devices.

Now, that concern is quickly melting away, thanks to a series of breakthroughs in recent months in software, hardware and energy technologies. That’s likely to drive AI-driven products and services even further away from a dependence on powerful cloud-computing services and enable them to move into every part of our lives — even inside our bodies.

By 2022, 80% of smartphones shipped will have AI capabilities on the device itself, up from 10% in 2017, according to market researcher Gartner Inc. And by 2023, that will add up to some 1.2 billion shipments of devices with on-device AI computing capabilities, up from 79 million in 2017, according to ABI Research.


A lot of startups and their backers smell a big opportunity. According to Jeff Bier, founder of the Embedded Vision Alliance, which held a conference this past week in Silicon Valley, investors have plowed some $1.5 billion into new AI chip startups in the past three years — more than was invested in all chip startups in the previous three years.

Market researcher Yole Développement forecasts that AI application processors will enjoy a 46% compound annual growth rate through 2023, when nearly all smartphones will have them, from fewer than 20% today.


And it’s not just startups. Just today, Intel Corp. previewed its coming Ice Lake chips, which among other things have “Deep Learning Boost” software and other new AI instructions on the graphics processing unit.

“Within the next two years, virtually every processor vendor will be offering some kind of competitive platform for AI,” Tom Hackett, principal analyst at IHS Markit, said at the alliance’s Embedded Vision Summit. “We are now seeing a next-generation opportunity.”


Those chips are finding their way into many more devices beyond smartphones. They’re also being used in millions of “internet of things” devices such as robots, drones, cars, cameras and wearables.

Among the 75 or so companies developing machine learning chips, for instance, is Israel’s Hailo, which raised a $21 million funding round in January. In mid-May it released a processor that’s tuned for deep learning, a branch of machine learning responsible for recent breakthroughs in voice and image recognition.

More compact and capable software is paving the way for AI at the edge as well. Google LLC, for instance, debuted its TensorFlow Lite machine learning library for mobile devices in late 2017, enabling smart cameras to identify wildlife or imaging devices to make medical diagnoses even where there’s no internet connection.

Some 2 billion mobiles now have TensorFlow Lite deployed on them, Google staff research engineer Pete Warden said at a keynote presentation at the Embedded Vision Summit.
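For readers who have not used it, the on-device workflow with TensorFlow Lite looks roughly like the sketch below, using the standard TensorFlow APIs; the tiny Keras model is a stand-in for a real trained network.

```python
import numpy as np
import tensorflow as tf

# 1) Build a stand-in Keras model and convert it to the compact TFLite format,
#    with default optimizations (e.g., quantization) to shrink it for devices.
model = tf.keras.Sequential([tf.keras.Input(shape=(10,)),
                             tf.keras.layers.Dense(4, activation="softmax")])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# 2) Run inference on-device with the lightweight interpreter; no network needed.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))   # the model's on-device prediction
```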

And in March, Google rolled out an on-device speech recognizer to power speech input in Gboard, Google’s virtual keyboard app. The automatic speech recognition transcription algorithm is now down to 80 megabytes so it can run on the Arm Ltd. A-series chip inside a typical Pixel phone, and that means it works offline so there’s no network latency or spottiness.

Not least, rapidly rising privacy concerns about data traversing the cloud means there’s also a regulatory reason to avoid moving data off the devices.

“Virtually all the machine learning processing will be done on the device,” said Bier, who’s also co-founder and president of Berkeley Design Technology Inc., which provides analysis and engineering services for embedded digital signal processing technology. And there will be a whole lot of devices: Warden cited an estimate of 250 billion active embedded devices in the world today, and that number is growing 20% a year.


Google’s Pete Warden (Photo: Robert Hof/SiliconANGLE)

But doing AI on such devices is no easy task. It’s not just the size of the machine learning algorithms but also the power it takes to execute them, since smartphones and especially IoT devices such as cameras and various sensors can’t depend on power from a wall socket or even batteries. “The devices will not scale if we become bound to changing or recharging batteries,” said Warden.

The radio connections needed to send data to and from the cloud also are energy hogs, so communicating via cellular or other connections is a deal breaker for many small, cheap devices. The result, said Yohann Tschudi, technology and market analyst at Yole Développement: “We need a dedicated architecture for what we want to do.”

Many such devices realistically must draw less than a milliwatt, about a thousandth of what a smartphone uses. The good news is that an increasing array of sensors and even microprocessors promises to do just that.

The U.S. Department of Energy, for instance, has helped develop low-cost wireless peel-and-stick sensors for building energy management in partnership with Molex Inc. and building automation firm SkyCentrics Inc. And experimental new image sensors can power themselves with ambient light.

And even microprocessors, the workhorses of computing, can be very low-power, such as those from startups Ambiq Micro, Eta Compute, Syntiant Corp., Applied Brain Research, Silicon Laboratories Inc. and GreenWaves Technologies.

“There’s no theoretical reason we can’t compute in microwatts,” or a thousand times smaller than milliwatts, Warden said. That’s partly because they can be programmed, for instance, to wake up a radio to talk to the cloud only when something actionable happens, like liquid spilling on a floor.


Embedded Vision Summit (Photo: Robert Hof/SiliconANGLE)

All this suggests a vast new array of applications of machine learning on everything from smartphones to smart cameras and factory monitoring sensors. Indeed, said Warden, “We’re getting so many product requests to run machine learning on embedded devices.”

Among those applications:

  • Predictive maintenance using accelerometers to determine if a machine is shaking too much or making a funny noise.
  • Presence detection for street lights so they turn on only when someone’s nearby.
  • Agricultural pest recognition using vision sensors or tiny cameras scattered throughout fields (below).
  • Illegal logging detection using old, solar-powered Android phones mounted on trees to hear chainsaws.
  • Medical devices to measure heart rate, insulin levels and body activity using sensors that could even be swallowed.
  • Voice separation using video (below).

Warden even anticipates that sensors could talk to each other, such as in a smart home where the smoke alarm detects a potential fire and the toaster replies that no, it’s just burned toast. That’s speculative for now, but Google’s already working on “federated learning” to train machine learning models without using centralized training data (below). 

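The core idea of federated learning is simple to sketch: each device computes a model update on its own data, and only the updates, never the raw data, are averaged centrally. Below is a minimal NumPy illustration of that averaging loop, a toy setup rather than Google’s implementation.

```python
import numpy as np

def local_update(weights, local_X, local_y, lr=0.1):
    """One on-device gradient step for a linear model; the raw data never leaves the device."""
    grad = local_X.T @ (local_X @ weights - local_y) / len(local_y)
    return weights - lr * grad

rng = np.random.default_rng(0)
weights = np.zeros(3)                                   # shared global model
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for round_num in range(10):                             # federated averaging rounds
    updates = [local_update(weights, X, y) for X, y in devices]
    weights = np.mean(updates, axis=0)                  # the server averages weights only
```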

None of this means the cloud won’t continue to have a huge role in machine learning. All those examples involve running the models on devices, a process known as inference.

https://youtu.be/MD61bddZtbg

The training of the models, on the other hand, still involves processing massive amounts of data on powerful clusters of computers.

But it’s now apparent that the future of AI lies less in the cloud than at the edge.