Innovative AI Breath Analyzer Diagnoses Diseases by “Smell” – AI System to Detect 17 Diseases from Exhaled Breath with 86% Accuracy



Re-Posted from Psychology Today: Author Cami Rosso

Imagine being able to know whether you have Parkinson’s disease, multiple sclerosis, liver failure, Crohn’s disease, pulmonary hypertension, chronic kidney disease, or any number of cancers based on a simple, non-invasive test of your breath. Breath analyzers to detect alcohol have been around for well over half a century—why not apply the same concept to detect diseases? A global team of scientists from universities in Israel, France, Latvia, China and the United States has developed an artificial intelligence (AI) system to detect 17 diseases from exhaled breath with 86 percent accuracy.

The research team, led by Professor Hossam Haick of the Technion-Israel Institute of Technology, collected breath samples from 1,404 subjects with either no disease (healthy controls) or one of 17 different diseases. The disease conditions include lung cancer, colorectal cancer, head and neck cancer, ovarian cancer, bladder cancer, prostate cancer, kidney cancer, gastric cancer, Crohn’s disease, ulcerative colitis, irritable bowel syndrome, idiopathic Parkinson’s disease, atypical parkinsonism, multiple sclerosis, pulmonary hypertension, pre-eclampsia toxemia, and chronic kidney disease.

The concept is relatively simple—identify the breathprints of diseases and compare them to a person’s exhaled breath. What makes it complicated is the execution. For example, how do you identify the breathprint of a disease? Is it unique, like a fingerprint? Answering these questions requires a deeper look at the molecular composition of breath.

When we exhale, we release nitrogen, oxygen, carbon dioxide, argon, and water vapor. Human breath also contains volatile organic compounds (VOCs)–organic chemicals that are emitted as gases and have a high vapor pressure at normal temperature. American biochemist Linus Pauling, one of the founders of modern quantum chemistry and molecular biology, and recipient of the 1954 Nobel Prize in Chemistry and the 1962 Nobel Peace Prize, studied 250 human breath volatiles using gas-liquid chromatography in 1971. Pauling is widely regarded as a pioneer of modern breath analysis. Exhaled breath contains more than 3,500 components, mostly VOCs present in small quantities, according to a 2011 study published in “Annals of Allergy, Asthma & Immunology.”

VOCs are the common factor in the smelling process for both breath analyzers and humans. When we inhale, the nose draws in odor molecules that typically contain volatile (easily evaporated) chemicals. Once the odor molecules contact the olfactory epithelium tissue that lines the nasal cavity, they bind with olfactory receptors, which send electrical impulses to spherical structures called glomeruli in the olfactory bulb of the brain.

There are approximately 2,000 glomeruli near the surface of the olfactory bulb. Smell is the brain’s interpretation of the odorant patterns relayed by the glomeruli. The human nose can detect a trillion smells. In Haick’s research team, nanotechnology and machine learning replace the biological brain in the smelling process.

Haick’s team of scientists developed a system, aptly called “NaNose,” that uses nanotechnology-based sensors trained to detect the volatile organic compounds associated with the diseases in the study. NaNose has two layers. One is an inorganic nanolayer with nanotubes and gold nanoparticles for electrical conductivity. The other is an organic carbon-based sensing layer that controls the electrical resistance of the inorganic layer: the resistance changes depending on which VOCs it encounters.

Artificial intelligence (AI) is used to analyze the data. Specifically, deep learning identifies patterns in the data in order to match incoming signals with the chemical signatures of specific diseases. The AI system was then trained on more than 8,000 patients in clinics with promising results—in a blinded test, the system detected gastric cancer with 92-94 percent accuracy. The researchers discovered that “each disease has its own unique breathprint.”
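To make the pattern-matching step concrete, here is a minimal, hypothetical sketch in Python of the kind of classifier described above: a small neural network mapping an array of sensor-resistance features to condition labels. The data is synthetic, and the sensor count and network shape are invented for illustration; this is not the NaNose team’s actual pipeline.

```python
# Illustrative only: classifying synthetic "breathprints".
# Sensor count, features and network shape are assumptions,
# not details taken from the published study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_sensors, n_classes = 1404, 59, 18   # 17 diseases + healthy control
X = rng.normal(size=(n_samples, n_sensors))      # resistance-change features
y = rng.integers(0, n_classes, size=n_samples)   # condition labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~chance on random data
```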

Efforts are underway to miniaturize and commercialize the innovative technology developed by Haick’s team in a project called “SniffPhone.” In November 2018, the European Commission’s Horizon 2020 program awarded SniffPhone the “2018 Innovation Award” for the “Most Innovative Project.”

The market opportunity for medical breath analyzers is expected to grow. The global breath analyzer market is projected to reach USD 11.3 billion by 2024, according to figures published in June 2018 by Grand View Research, with alcohol detection accounting for the majority of revenue. Currently, breath analyzers are used to detect alcohol and drugs and to diagnose asthma and gastroenteric conditions. Clinical applications are projected to increase due to the “introduction of advanced technologies to detect nitric oxide and carbon monoxide in breath,” Grand View Research states. According to the study, the medical application segment is expected to grow due to the ability of breath analyzers to detect volatile organic compounds (VOCs) that may help in “early diagnosis of conditions including cardiopulmonary diseases and lung and breast cancer,” and act as “biomarkers to assess disease progressions.”

By applying cross-disciplinary innovative technologies from the fields of artificial intelligence, nanotechnology, and molecular chemistry, diagnosing a wide variety of diseases may be as simple and non-invasive as a breath analysis using a handheld device in the not-so-distant future.

 


Waterproof artificial synapses for pattern recognition in organic environments

A new stable, transparent, and flexible electronic device emulates essential synaptic behaviors, with potential for #AI in organic environments.

Figure: Structure and materials of the transparent and flexible synapses. a) Illustration of the identical bio-synapse and artificial synapse structures; the two electrodes and the functional layer correspond to the pre-synapse, post-synapse, and synaptic cleft, respectively. b) Schematic of the ITO/PEDOT:PSS/ITO flexible and transparent artificial synaptic device. c) Top and d) cross-sectional SEM images of the PEDOT:PSS film on the Si substrate; the film thickness was 42.18 nm. e) Schematic structure and f) Raman spectra of PEDOT:PSS. g) Transmittance spectrum of the PET/ITO, PET/ITO/PEDOT:PSS, and PET/ITO/PEDOT:PSS/ITO structures. h) AFM image (2×2 μm²) of the PEDOT:PSS film on the PET/ITO substrate; root-mean-square average roughness (Rq) was 1.99 nm. Credit: Wang et al.

Most artificial intelligence (AI) systems try to replicate biological mechanisms and behaviors observed in nature. One key example is the electronic synapse (e-synapse), which tries to reproduce the synapse: the junction between nerve cells that enables the transmission of electrical or chemical signals to target cells in the human body.

Over the past few years, researchers have simulated versatile functions using single physical devices. These devices could soon enable advanced learning and memory capabilities in machines, emulating functions of the human brain. 

Recent studies have proposed flexible, transparent and even bio-compatible electronic devices for pattern recognition, which could pave the way toward a new generation of wearable synaptic systems. These “invisible” e-synapses, however, come with a notable disadvantage: they easily dissolve in water or in organic solutions, which is far from ideal for wearable applications.

To overcome this limitation, researchers at Fudan University in Shanghai have set out to develop a new stable, flexible and waterproof synapse suitable for applications in organic environments. Their study, outlined in a paper published in the Royal Society of Chemistry’s journal Nanoscale Horizons, presents a new fully transparent electronic device that emulates essential synaptic behaviors, such as paired-pulse facilitation (PPF), long-term potentiation/depression (LTP/LTD) and learning-forgetting-relearning processes.

“In the present work, a stable waterproof artificial synapse based on a fully transparent electronic device, suitable for wearable applications in an organic environment, is for the first time demonstrated,” the researchers wrote in their paper.

The flexible, fully transparent and waterproof device developed by the researchers has so far achieved remarkable results, with an optical transmittance of ~87.5 percent in the visible light range. It was also able to reliably replicate LTP/LTD processes in bent states. LTP/LTD are two processes affecting synaptic plasticity, which entail, respectively, an enhancement and a decrease in synaptic strength.
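As a rough illustration of what an LTP/LTD measurement looks like in device studies of this kind, the sketch below steps a synaptic “conductance” up with potentiating pulses and down with depressing pulses. The step size and bounds are made-up numbers, not values from the paper.

```python
# Minimal sketch of LTP/LTD characterization: conductance rises with each
# potentiating pulse and falls with each depressing pulse. Illustrative only.
def apply_pulses(g, n_pulses, step, g_min=0.0, g_max=1.0):
    """Apply n_pulses; a positive step models LTP, a negative step models LTD."""
    history = []
    for _ in range(n_pulses):
        g = min(g_max, max(g_min, g + step))
        history.append(g)
    return g, history

g = 0.5                                # arbitrary starting conductance
g, ltp = apply_pulses(g, 20, +0.02)    # potentiation: strength increases
g, ltd = apply_pulses(g, 20, -0.02)    # depression: strength decreases
print(f"after LTP: {ltp[-1]:.2f}, after LTD: {ltd[-1]:.2f}")
```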

The researchers tested their synapses by immersing them in water and in five common organic solvents for over 12 hours. They found that the devices then sustained 6,000 spikes without noticeable degradation. The researchers also used their e-synapses to develop a device-to-system-level simulation framework, which achieved a handwritten digit recognition accuracy of 92.4 percent.

“The device demonstrated an excellent transparency of 87.5 percent at 550nm wavelength and flexibility at a radius of 5mm,” the researchers wrote in their paper. “Typical synaptic plasticity characteristics, including EPSC/IPSC, PPF and learning-forgetting-relearning processes, were emulated. Furthermore, the e-synapse exhibited reliable LTP/LTD behaviors at flat and bended states, even after being immersed in water and organic solvents for over 12 hours.” 

The device proposed by this team of researchers is the first “invisible” and waterproof e-synapse that can reliably operate in organic environments without damage or deterioration. In the future, it could aid the development of new reliable brain-inspired neuromorphic systems, including wearable and implantable devices.

More information: Tian-Yu Wang et al. Fully transparent, flexible and waterproof synapses with pattern recognition in organic environments, Nanoscale Horizons (2019). DOI: 10.1039/C9NH00341J

© 2019 Science X Network

No cloud required: Why AI’s future is at the edge


For all the promise and peril of artificial intelligence, there’s one big obstacle to its seemingly relentless march: the algorithms for running AI applications have been so big and complex that they’ve required processing on powerful machines in the cloud and data centers, making a wide swath of applications less useful on smartphones and other “edge” devices.

Now, that concern is quickly melting away, thanks to a series of breakthroughs in recent months in software, hardware and energy technologies. That’s likely to drive AI-driven products and services even further away from a dependence on powerful cloud-computing services and enable them to move into every part of our lives — even inside our bodies.

By 2022, 80% of smartphones shipped will have AI capabilities on the device itself, up from 10% in 2017, according to market researcher Gartner Inc. And by 2023, that will add up to some 1.2 billion shipments of devices with on-device AI computing capabilities, up from 79 million in 2017, according to ABI Research.


A lot of startups and their backers smell a big opportunity. According to Jeff Bier, founder of the Embedded Vision Alliance, which held a conference this past week in Silicon Valley, investors have plowed some $1.5 billion into new AI chip startups in the past three years — more than was invested in all chip startups in the previous three years.

Market researcher Yole Développement forecasts that AI application processors will enjoy a 46% compound annual growth rate through 2023, when nearly all smartphones will have them, up from fewer than 20% today.


And it’s not just startups. Just today, Intel Corp. previewed its coming Ice Lake chips, which among other things include “Deep Learning Boost” software and other new AI instructions on the graphics processing unit.

“Within the next two years, virtually every processor vendor will be offering some kind of competitive platform for AI,” Tom Hackett, principal analyst at IHS Markit, said at the alliance’s Embedded Vision Summit. “We are now seeing a next-generation opportunity.”


Those chips are finding their way into many more devices beyond smartphones. They’re also being used in millions of “internet of things” devices such as robots, drones, cars, cameras and wearables.

Among the 75 or so companies developing machine learning chips, for instance, is Israel’s Hailo, which raised a $21 million funding round in January. In mid-May it released a processor that’s tuned for deep learning, a branch of machine learning responsible for recent breakthroughs in voice and image recognition.

More compact and capable software is paving the way for AI at the edge as well. Google LLC, for instance, debuted its TensorFlow Lite machine learning library for mobile devices in late 2017, enabling smart cameras to identify wildlife or imaging devices to make medical diagnoses even where there’s no internet connection.

Some 2 billion mobile devices now have TensorFlow Lite deployed on them, Google staff research engineer Pete Warden said at a keynote presentation at the Embedded Vision Summit.

And in March, Google rolled out an on-device speech recognizer to power speech input in Gboard, Google’s virtual keyboard app. The automatic speech recognition transcription algorithm is now down to 80 megabytes so it can run on the Arm Ltd. A-series chip inside a typical Pixel phone, and that means it works offline so there’s no network latency or spottiness.
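For readers curious what on-device inference looks like in code, here is a hedged sketch using the TensorFlow Lite Python interpreter; the file name model.tflite is a placeholder for any converted model, and the input here is just a zero tensor of the expected shape.

```python
# Sketch of on-device inference with the TensorFlow Lite interpreter.
# "model.tflite" is a placeholder; any converted model would do.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input tensor of the shape the model expects, run, read the result.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction.shape)
```

The same .tflite model file can be executed by TensorFlow Lite runtimes on phones and embedded boards, which is what makes the offline scenarios above possible.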

Not least, rapidly rising privacy concerns about data traversing the cloud mean there’s also a regulatory reason to avoid moving data off the devices.

“Virtually all the machine learning processing will be done on the device,” said Bier, who’s also co-founder and president of Berkeley Design Technology Inc., which provides analysis and engineering services for embedded digital signal processing technology. And there will be a whole lot of devices: Warden cited an estimate of 250 billion active embedded devices in the world today, and that number is growing 20% a year.


Google’s Pete Warden (Photo: Robert Hof/SiliconANGLE)

But doing AI on such devices is no easy task. It’s not just the size of the machine learning algorithms but also the power it takes to execute them, especially since smartphones and IoT devices such as cameras and various sensors can’t depend on power from a wall socket or even batteries. “The devices will not scale if we become bound to changing or recharging batteries,” said Warden.

The radio connections needed to send data to and from the cloud also are energy hogs, so communicating via cellular or other connections is a deal breaker for many small, cheap devices. The result, said Yohann Tschudi, technology and market analyst at Yole Développement: “We need a dedicated architecture for what we want to do.”

Many of these devices realistically must draw less than a milliwatt, about a thousandth of what a smartphone uses. The good news is that an increasing array of sensors and even microprocessors promises to do just that.

The U.S. Department of Energy, for instance, has helped develop low-cost wireless peel-and-stick sensors for building energy management in partnership with Molex Inc. and building automation firm SkyCentrics Inc. And experimental new image sensors can power themselves with ambient light.

And even microprocessors, the workhorses of computing, can be very low-power, including those from startups like Ambiq Micro, Eta Compute, Syntiant Corp., Applied Brain Research, Silicon Laboratories Inc. and GreenWaves Technologies.

“There’s no theoretical reason we can’t compute in microwatts,” or a thousand times smaller than milliwatts, Warden said. That’s partly because they can be programmed, for instance, to wake up a radio to talk to the cloud only when something actionable happens, like liquid spilling on a floor.


Embedded Vision Summit (Photo: Robert Hof/SiliconANGLE)

All this suggests a vast new array of applications of machine learning on everything from smartphones to smart cameras and factory monitoring sensors. Indeed, said Warden, “We’re getting so many product requests to run machine learning on embedded devices.”

Among those applications:

  • Predictive maintenance using accelerometers to determine if a machine is shaking too much or making a funny noise.
  • Presence detection for street lights so they turn on only when someone’s nearby.
  • Agricultural pest recognition using vision sensors or tiny cameras scattered throughout fields (below).
  • Illegal logging detection using old, solar-powered Android phones mounted on trees to hear chainsaws.
  • Medical devices to measure heart rate, insulin levels and body activity using sensors that could even be swallowed.
  • Voice separation using video (below).

Warden even anticipates that sensors could talk to each other, such as in a smart home where the smoke alarm detects a potential fire and the toaster replies that no, it’s just burned toast. That’s speculative for now, but Google’s already working on “federated learning” to train machine learning models without using centralized training data (below). 
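To illustrate the federated learning idea, here is a toy sketch: several simulated devices each take a gradient step on their own private data, and only the resulting model weights, never the raw data, are averaged by the server. It is a deliberately simplified stand-in for Google’s production system.

```python
# Toy federated averaging: devices train locally, the server averages weights.
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One gradient step of least-squares regression on a device's private data."""
    grad = data.T @ (data @ weights - labels) / len(labels)
    return weights - lr * grad

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Five devices, each holding 20 private examples with 3 features.
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for _ in range(10):                       # ten communication rounds
    local_ws = [local_update(global_w.copy(), X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)  # server averages the local models

print("federated model weights:", global_w)
```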


None of this means the cloud won’t continue to have a huge role in machine learning. All those examples involve running the models on devices, a process known as inference.

https://youtu.be/MD61bddZtbg

The training of the models, on the other hand, still involves processing massive amounts of data on powerful clusters of computers.

But it’s now apparent that the future of AI lies less in the cloud than at the edge.


‘Nano-Spidey senses’ could help autonomous machines (EVs, drones) see better


Researchers are building spider-inspired sensors into the shells of autonomous drones and cars so that they can detect objects better. Credit: Taylor Callery

What if drones and self-driving cars had the tingling “spidey senses” of Spider-Man?

They might actually detect and avoid objects better, says Andres Arrieta, an assistant professor of mechanical engineering at Purdue University, because they would process sensory information faster.

Better sensing capabilities would make it possible for drones to navigate in dangerous environments and for cars to prevent accidents caused by human error. Current state-of-the-art sensor technology doesn’t process data fast enough—but nature does.

And researchers wouldn’t have to create a radioactive spider to give autonomous machines superhero sensing abilities.

Instead, Purdue researchers have built sensors inspired by spiders, bats, birds and other animals, whose actual spidey senses are linked to special neurons called mechanoreceptors.

The nerve endings—mechanosensors—only detect and process information essential to an animal’s survival. They come in the form of hair, cilia or feathers.

“There is already an explosion of data that sensors can collect—and this rate is increasing faster than what conventional computing would be able to process,” said Arrieta, whose lab applies principles of nature to the design of structures, ranging from robots to aircraft wings.

“Nature doesn’t have to collect every piece of data; it filters out what it needs,” he said.

Many biological mechanosensors filter data—the information they receive from an environment—according to a threshold, such as changes in pressure or temperature.

In nature, ‘spidey-senses’ are activated by a force associated with an approaching object. Researchers are giving autonomous machines the same ability through sensors that change shape when prompted by a predetermined level of force. Credit: ETH Zürich images/Hortense Le Ferrand

A spider’s hairy mechanosensors, for example, are located on its legs. When a spider’s web vibrates at a frequency associated with prey or a mate, the mechanosensors detect it, generating a reflex in the spider that then reacts very quickly. The mechanosensors wouldn’t detect a lower frequency, such as that of dust on the web, because it’s unimportant to the spider’s survival.

The idea would be to integrate similar sensors straight into the shell of an autonomous machine, such as an airplane wing or the body of a car. The researchers demonstrated in a paper published in ACS Nano that engineered mechanosensors inspired by the hairs of spiders could be customized to detect predetermined forces. In real life, these forces would be associated with a certain object that an autonomous machine needs to avoid.

But the sensors they developed don’t just sense and filter at a very fast rate—they also compute, and without needing a power supply.

“There’s no distinction between hardware and software in nature; it’s all interconnected,” Arrieta said. “A sensor is meant to interpret data, as well as collect and filter it.”

In nature, once a particular level of force activates the mechanoreceptors associated with the hairy mechanosensor, these mechanoreceptors compute information by switching from one state to another.

Purdue researchers, in collaboration with Nanyang Technological University in Singapore and ETH Zürich, designed their sensors to do the same, and to use these on/off states to interpret signals. An intelligent machine would then react according to what these sensors compute.

These artificial mechanosensors are capable of sensing, filtering and computing very quickly because they are stiff, Arrieta said. The sensor material is designed to rapidly change shape when activated by an external force. Changing shape makes conductive particles within the material move closer to each other, which then allows electricity to flow through the sensor and carry a signal. This signal informs how the autonomous system should respond.
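In software terms, the behavior described above amounts to event-driven filtering: nothing is transmitted or computed until a force crosses the snap-through threshold. A conceptual sketch, with an invented threshold value:

```python
# Conceptual sketch: the sensor stays "off" until an applied force crosses a
# preset snap-through threshold, then switches "on" and emits a signal.
THRESHOLD_N = 2.0  # hypothetical snap-through force, in newtons

def mechanosensor(force_newtons, threshold=THRESHOLD_N):
    """Return 1 if the composite snaps through (conductive path closes), else 0."""
    return int(force_newtons >= threshold)

readings = [0.1, 0.5, 1.9, 2.3, 0.4, 3.1]            # forces from passing objects
events = [f for f in readings if mechanosensor(f)]   # only relevant forces survive
print("signals forwarded for processing:", events)   # [2.3, 3.1]
```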

“With the help of machine learning algorithms, we could train these sensors to function autonomously with minimum energy consumption,” Arrieta said. “There are also no barriers to manufacturing these sensors to be in a variety of sizes.”




More information: Hortense Le Ferrand et al, Filtered Mechanosensing Using Snapping Composites with Embedded Mechano-Electrical Transduction, ACS Nano (2019). DOI: 10.1021/acsnano.9b01095

Journal information: ACS Nano
Provided by Purdue University

MIT’s 10 Breakthrough Technologies for 2019 – Introduction by Bill Gates: Part I


In this two-part re-post from MIT Technology Review’s 10 Breakthrough Technologies for 2019, guest curator Bill Gates chooses this year’s list of inventions that will change the world for the better.

Part I: Bill Gates: How we’ll Invent the Future

I was honored when MIT Technology Review invited me to be the first guest curator of its 10 Breakthrough Technologies. Narrowing down the list was difficult. I wanted to choose things that not only will create headlines in 2019 but captured this moment in technological history—which got me thinking about how innovation has evolved over time.

My mind went to—of all things—the plow. Plows are an excellent embodiment of the history of innovation. Humans have been using them since 4000 BCE, when Mesopotamian farmers aerated soil with sharpened sticks. We’ve been slowly tinkering with and improving them ever since, and today’s plows are technological marvels.

 

But what exactly is the purpose of a plow? It’s a tool that creates more: more seeds planted, more crops harvested, more food to go around. In places where nutrition is hard to come by, it’s no exaggeration to say that a plow gives people more years of life. The plow—like many technologies, both ancient and modern—is about creating more of something and doing it more efficiently, so that more people can benefit.

Contrast that with lab-grown meat, one of the innovations I picked for this year’s 10 Breakthrough Technologies list. Growing animal protein in a lab isn’t about feeding more people. There’s enough livestock to feed the world already, even as demand for meat goes up. Next-generation protein isn’t about creating more—it’s about making meat better. It lets us provide for a growing and wealthier world without contributing to deforestation or emitting methane. It also allows us to enjoy hamburgers without killing any animals.

Put another way, the plow improves our quantity of life, and lab-grown meat improves our quality of life. For most of human history, we’ve put most of our innovative capacity into the former. And our efforts have paid off: worldwide life expectancy rose from 34 years in 1913 to 60 in 1973 and has reached 71 today.

Because we’re living longer, our focus is starting to shift toward well-being. This transformation is happening slowly. If you divide scientific breakthroughs into these two categories—things that improve quantity of life and things that improve quality of life—the 2009 list looks not so different from this year’s. Like most forms of progress, the change is so gradual that it’s hard to perceive. It’s a matter of decades, not years—and I believe we’re only at the midpoint of the transition.

To be clear, I don’t think humanity will stop trying to extend life spans anytime soon. We’re still far from a world where everyone everywhere lives to old age in perfect health, and it’s going to take a lot of innovation to get us there. Plus, “quantity of life” and “quality of life” are not mutually exclusive. A malaria vaccine would both save lives and make life better for children who might otherwise have been left with developmental delays from the disease.

We’ve reached a point where we’re tackling both ideas at once, and that’s what makes this moment in history so interesting. If I had to predict what this list will look like a few years from now, I’d bet technologies that alleviate chronic disease will be a big theme. This won’t just include new drugs (although I would love to see new treatments for diseases like Alzheimer’s on the list). The innovations might look like a mechanical glove that helps a person with arthritis maintain flexibility, or an app that connects people experiencing major depressive episodes with the help they need.

If we could look even further out—let’s say the list 20 years from now—I would hope to see technologies that center almost entirely on well-being. I think the brilliant minds of the future will focus on more metaphysical questions: How do we make people happier? How do we create meaningful connections? How do we help everyone live a fulfilling life?

I would love to see these questions shape the 2039 list, because it would mean that we’ve successfully fought back disease (and dealt with climate change). I can’t imagine a greater sign of progress than that. For now, though, the innovations driving change are a mix of things that extend life and things that make it better. My picks reflect both. Each one gives me a different reason to be optimistic for the future, and I hope they inspire you, too.

My selections include amazing new tools that will one day save lives, from simple blood tests that predict premature birth to toilets that destroy deadly pathogens. I’m equally excited by how other technologies on the list will improve our lives. Wearable health monitors like the wrist-based ECG will warn heart patients of impending problems, while others let diabetics not only track glucose levels but manage their disease. Advanced nuclear reactors could provide carbon-free, safe, secure energy to the world.

One of my choices even offers us a peek at a future where society’s primary goal is personal fulfillment. Among many other applications, AI-driven personal agents might one day make your e-mail in-box more manageable—something that sounds trivial until you consider what possibilities open up when you have more free time.

The 30 minutes you used to spend reading e-mail could be spent doing other things. I know some people would use that time to get more work done—but I hope most would use it for pursuits like connecting with a friend over coffee, helping your child with homework, or even volunteering in your community.

That, I think, is a future worth working toward.


 

You can read Part II Here

Saving Us From AI’s Worst Case Scenarios – An Interview with MIT Professor Max Tegmark



(AI) “… Instead, the largest threat would be if it turns extremely competent. This is because its goals may not be aligned with our goals, either because it is controlled by someone who does not share our goals, or because the machine itself has power over us.”

Artificial intelligence (AI) is one of the hottest trends pursued by the private sector, academics, and government institutions. The promise of AI is to make our lives better: to have an electronic brain to complement our own, to take over menial tasks so that we can focus on higher value activities, to allow us to make better decisions in our personal and professional lives.

There is also a darker side to AI that many fear. What happens when bad actors leverage AI for bad uses? How will we ensure that AI is not a wedge to divide the haves and have-nots further apart? Moreover, what happens when our jobs are fundamentally changed or go away when we derive so much of what defines us from what we do professionally?

Max Tegmark has studied these issues intimately from his perch as a professor at MIT and as the  co-founder of the Future of Life Institute. He has synthesized his own thoughts into a powerful book called Life 3.0: Being Human in the Age of Artificial Intelligence. As the title suggests, AI will redefine what it means to be human due to the scale of the changes it will bring about.

                                                                     

Tegmark likes the analogy of the automobile to make the case for what is necessary for AI to be beneficial for humanity. He notes that three things are necessary: it must have an engine (the power to create value), it needs steering (so that it can be moved toward good rather than evil ends), and it must have direction, or a roadmap for how to get to the beneficial destination. He notes that “the way to create a good future with technology is to continuously win the wisdom race. As technology grows more powerful, the wisdom in which we manage it must keep up.” He describes all of this and more in this interview.

MIT Professor and Author, Max Tegmark CREDIT: MIT 

(To listen to an unabridged podcast version of this interview, please click this link. This is the 31st interview in the Tech Influencers series. To listen to past interviews with the likes of former Mexican President Vicente Fox, Sal Khan, Sebastian Thrun, Steve Case, Craig Newmark, Stewart Butterfield, and Meg Whitman, please visit this link. To read future articles in this series, please follow me on Twitter @PeterAHigh.)

The Interview by Peter High


 

Peter High: Congratulations on your book, Life 3.0: Being Human in the Age of Artificial Intelligence. When and where did your interest in the topic of Artificial Intelligence [AI] begin?

High: When you have described your efforts to figure out where AI might take us, you make an analogy to driving a car. First, you need the engine and the power to make AI work. Second, you need steering because AI must be steered in one direction or another. Lastly, there needs to be a destination. Can you elaborate on each of those topics, and could you give us your hypothesis as to where we are heading?

Tegmark: If you are building a rocket or a car, it would be nuts to exclusively focus on the engine’s power while ignoring how to steer it. Even if you have the steering sorted out, you are going to have trouble if you are unable to determine where you are trying to go with it. Unfortunately, I believe this is what we are doing as we continue to build more powerful technology, especially with AI. To be as ambitious as possible, we need to think about all three elements, which are the power, the steering, and the destination of the technology.

Because it is so important, I spend a great deal of time at MIT focused on steering. Along with Jaan Tallinn and several other colleagues, I co-founded the Future of Life Institute, which [focuses on] the destination. While we are making AI more powerful, it is critical to know what type of society we are aspiring to create with this technology. If society accomplishes the original goal of AI research, which is to make so-called “Artificial General Intelligence” [AGI] that can do all jobs better than humans, we have to determine what it will mean to be a human in the future. I am convinced that if we succeed, it will either be the best or the worst advancement ever, and it will come down to the amount of planning we do now. If we have no clue about where we want to go, it is unlikely that we are going to end up in an ideal situation. However, if we plan accordingly and steer technology in the right direction, we can create an inspiring future that will allow humanity to flourish in a way that we have never seen before.

I believe this to be true because the reason today’s society is better than the Stone Age is technology. Everything I love about civilization is the product of intelligence. Technology is the reason why life expectancy is no longer 32 years. If we can take this further and amplify our intelligence with AI, we have the potential to solve humanity’s greatest challenges. These technologies can help us cure diseases that we are currently told are incurable because we have not yet been smart enough to solve them. Further, technology can lift everybody out of poverty, solve the issues in our climate, and allow us to go in inspiring directions that we have not even thought of yet. It is clear that there is an enormous upside if we get this right, and that is why I am incredibly motivated to work on that.

High: I am struck by the caveman analogy. We are so far removed from cavemen and cavewomen that a modern human and caveman would not be able to recognize each other in terms of life expectancy, the ability to communicate, and the time we have to reflect and ponder our situation, among other differences.

Tegmark: That is so true, and you said something super interesting there. While we are so far removed, we are largely stuck in the caveman mindset. When we were cavemen, the most powerful technology we had were rocks and sticks, which limited our ancestors’ ability to cause significant damage. While there were always cavemen that wanted to harm as many people as possible, there was only so much damage one could do with a rock and a stick.

Unfortunately, with nuclear weapons, the damage can be devastating, and as technology gets more powerful, it becomes easier to mess up. However, at the same time, we now have more power to use technology for good. Because of both of these factors, the more powerful the technology gets, the more important the steering becomes. Technology is neither good nor evil, so when people ask me if I am for AI or against AI, I ask them if they are for fire or against fire. Fire can obviously be used to keep your house warm in the winter, or it can be used for arson. To keep this under control, we have put a great deal of effort into the steering of fire. We have fire extinguishers and fire departments, and we created ways to punish people who use fire in ways that are not appropriate.

We have to step out of our caveman mindset. The way to create a good future with technology is to continuously win the wisdom race. As technology grows more powerful, the wisdom in which we manage it must keep up. This was true with fire and with the automobile engine, and I believe we were successful in those missions. While we continuously messed up, we learned from our mistakes and invented the seat belt, the airbag, traffic lights, and laws against speeding. Ever since we were cavemen, we have been able to stay ahead in the wisdom race by learning from our mistakes. However, as technology gets more powerful, the margin for error is evaporating, and one mistake in the future may be one too many. We obviously do not want to have an “accidental” nuclear war with Russia and just brush it off as a mistake that we can learn from and be more careful of the next time. It is far more effective to be proactive and plan ahead, rather than reactive. I believe we need to implement this mindset before we build technology that can do everything better than us.

High: You mentioned there are some attributes that we still share with our distant ancestors. Even if AGI does not come for decades, the change will be almost the same in magnitude as the change from cavemen to the present day. For example, it potentially has the power to change the way in which we work. You have written persuasively about the possibility of what we do being taken over by AI. In a society where many of us are defined by the work that we do, it is quite unsettling to know that, what I love about my day job today will be done better by AI. We may need to redefine ourselves as a result. What are your perspectives on that?

Tegmark: I agree with that, and I would take it a step further and say that the jump from today to AGI is a bigger one than the jump from cavemen to the present day. When we were cavemen, we were the smartest species on the planet, and we still are today. With AGI, we will not be, which is a huge game changer. While we have doubled our life expectancy and seen new technologies emerge, we are still stuck on this tiny planet, and the majority of people still die within a century. However, if we can build AGI, the opportunities will be limitless.

People are not realizing this, and because we are still stuck in this caveman mindset, we continue to think that it will take us thousands of years to find a way to live 200, or even 1,000 years. Moreover, the mindset that we have to invent all the technologies ourselves has led us to believe that it will take thousands of years to move to another solar system. However, this is far from true because, by definition, AGI has the ability to do all jobs better than us, including jobs that can invent better AI among other technologies. This capability has led many to believe that AGI could be the last invention that we need to make. We may end up with a future where life on Earth and beyond flourishes for billions of years, not just for the next election cycle. This could all start on Earth if we can solve intelligence and use it to go in amazing directions. If we get this right, the upside will be far more significant than the benefits we reaped going from cavemen to the present day.

Regarding what it means to be a human if all jobs can be done better by machines, that is why the subtitle of my book is, Being Human in the Age of Artificial Intelligence. Jobs do not just give us an income, they give us meaning and a sense of purpose in our lives. Even if we can produce all that we need with machines and figure out how to share the wealth, it does not solve the question of how that purpose and meaning will be replaced. This crucial dilemma absolutely cannot be left to tech nerds such as myself because AI programmers are not world experts on what makes humans happy. We need to broaden this conversation to get everyone on board and discuss what type of future we want to create. This is essential, and unfortunately, I do not believe that we are going about this the right way.

Students often walk into my office asking for career advice, and in response, I always start by asking them about where they want to be in the future. If all the student can say is that they may get cancer, be murdered, or run over by a truck, that is a terrible strategy for career planning. I want these people to come in with fire in their eyes and say, “This is where I want to be.” From there, we can figure out what the challenges are and come up with a strong strategy to avoid them so that they can get to where they want to be. While we should take this same approach as a species, it is not the one we are taking. Every time I go to the movies and see something about the future, it showcases one dystopia after another. This approach makes us paranoid, and it divides us in the same way that fear always has. It is crucial for us to have a conversation around the type of futures we are excited about. I am not talking about getting 10 percent richer or curing a minor disease, but I want people to think big. If machines can do everything with technology, what kind of future would fire us up? What type of society do we want to live in? What would your typical day look like? If we can articulate a shared, positive vision that catches on around the world, I believe we have a real chance of getting there.

High: What happens if AGI gets to the point where the work that you are doing at MIT and at the Future of Life Institute is no longer meaningful?

Tegmark: That is a hard-hitting question. I get an incredible amount of joy from figuring stuff out, and if I could just press a button and the computer would write my papers for me, would it be as much fun? This is not an easy topic.

In my book, I discuss twelve different futures that people can choose between. Just because we can think about a future that we are convinced is perfect does not mean that we should do nothing. At a minimum, we should do the necessary thinking that will allow us to steer our future in the right direction. There are some obvious decisions that need to be made now, such as how income inequality will be handled. While we may be able to dramatically grow the overall world GDP, we must be able to share this economic pie so that everybody is better off. As more and more jobs get replaced by machines, incomes that have typically been paid in salaries will go towards whoever owns the machines. This concept is why Facebook, a high-tech company, is twelve times more valuable than Ford, despite the fact that it has eight times fewer employees. Unfortunately, we have not begun to make these decisions, and if we are unable to do so to the point where everyone benefits, then shame on us. As companies become more high-tech, we must make tweaks to the system to avoid leaving more people behind and ending up with far more income inequality. If this problem does not get solved, we will end up with more and more angry people, which will make democracy more and more unworkable. However, on the bright side, all that wealth makes this problem relatively easy to fix. All that needs to be done is to bring in enough tax revenue so that everyone can be better off.

The second aspect, which I believe is a no-brainer, is that we must ensure that we avoid a damaging arms race with the lethal autonomous weapons. Fortunately, nearly all the research in AI is going towards helping people in various ways, and most AI researchers want to keep it that way. Around the time I was born, we were on the cusp of a horrible arms race with bioweapons. When this happened, the biologists pushed hard to get an international ban on bioweapons, and as a result, most people cannot remember the last time they read about a bioweapon terrorist attack in the newspaper. If you ask a hundred random people on the street about their opinions on biology, they are all going to smile and associate it with new cures, rather than with bioweapons. It is critical that we handle AI weapons in a similar way.

We need to put a greater focus on the steering aspect of AI. Nearly all of the funding going into AI has been around making it more powerful, and little is going towards AI safety research. Even increasing this a little bit will make an impactful difference. As we put AI in charge of more infrastructure-level decisions, we must transform buggy and hackable computers into robust AI systems that can be trusted. If we fail to do so, all these fascinating new technologies can malfunction, harm us, or be hacked and used against us.

As AI becomes more and more capable, we have to work on the value alignment problem of AI. The real threat with AGI is not that it is going to turn evil in the way that it does in the silly Hollywood movies. Instead, the largest threat would be if it turns extremely competent. This is because its goals may not be aligned with our goals, either because it is controlled by someone who does not share our goals, or because the machine itself has power over us. We must solve some tough technical challenges in order to neutralize this threat. We have to figure out how to make machines understand our goals, adopt our goals, and then keep these goals if they get smarter. Although work has begun in this area, these problems are hard, and it may take roughly 30 years to solve them. It is absolutely critical that we focus on this problem now so that we have the answers by the time we need them. We have to stop looking at these issues as an afterthought.

High: What role do private sector, academic, and governmental institutions play? Each is exerting influence in their own ways, and they are progressing at different rates. How do you see that balance?

Tegmark: Academia is great for developing solutions to AI safety problems while making them publicly available so that everyone in the world can use them. You want safety solutions to be free because if someone owns the IP on them, it will cause a worse outcome.

I believe private companies have mostly played a constructive role in helping encourage the safety work around AI. For example, most of the big players in AI, such as Google, IBM, Microsoft, Facebook, and many international companies, have joined together in an AI partnership to encourage safety development.

On the flip side, governments need to step it up and provide more funding for the safety research. No government should fund nuclear reactor research without funding reactor safety research. Similarly, no country should fund computer science research without putting a decent slice towards the steering part.

That is my wish list as to what we should focus on in the current day to maximize the chances of this going well. In parallel, everyone else needs to ask themselves what future they want to see. They should remember that the next time they vote and whenever they exert influence, we want to create a future for everybody.

High: How do you keep up with the progress or lack thereof of these advances?

Tegmark: Both through the research taking place at MIT and through the nerdy AI conferences that I go to. Additionally, the non-profit work that I have been doing has been fascinating. I have spent a great deal of time speaking with top researchers and CEOs who are making incredible progress on this. I am encouraged, and I find that the leaders are mostly an idealistic bunch. I do not believe that they are doing this exclusively for the money. Instead, they want this technology to represent an opportunity to create a better future. We need to make sure that the society at large shares this goal of channeling AI for good, instead of using it to hack elections and create new ways to murder people anonymously. That would be an incredibly sad result of all these good intentions.

Peter High is President of Metis Strategy, a business and IT advisory firm. His latest book is Implementing World Class IT Strategy. He is also the author of World Class IT: Why Businesses Succeed When IT Triumphs. Peter moderates the Forum on World Class IT podcast series. He speaks at conferences around the world. Follow him on Twitter @PeterAHigh.

 


Four Emerging Technology Areas That Will Help Define Our World In 2019


Welcome to 2019…

2018 was surely a transformative year for technological innovation. We saw early development of ambient computing, quantum teleportation, cloaks of invisibility, genomics advancements and even robocops.

Granted, we’re not yet flying around in our own cars like the Jetsons, but we’re closer. In 2019 we will continue on the transformation path and adopt even more cutting-edge immersive technologies.

What’s ahead for the coming year? I envision four emerging technology areas that will significantly impact our lives in 2019.

1.  The Internet of Things and Smart Cities

The Internet of Things (IoT) refers to the general idea of devices and equipment that are readable, recognizable, locatable, addressable, and/or controllable via the internet. 

This includes everything from home appliances to wearable technology to cars. These days, if a device can be turned on, it most likely can be connected to the internet. Because of this, data can be shared quickly across a multitude of objects and devices, increasing the rate of communications.

Cisco, which calls the Internet of Things “The Internet of Everything,” predicts that 50 billion devices (including our smartphones, appliances and office equipment) will be wirelessly connected via a network of sensors to the internet by 2020.

The term “Smart City” connotes creating a public/private infrastructure to conduct activities that protect and secure citizens. The concept of Smart Cities integrates communications (5G), transportation, energy, water resources, waste collection, smart-building technologies, and security technologies and services. They are the cities of the future.

IoT is the cog of Smart Cities that integrates these resources, technologies, services and infrastructure.

The research firm Frost & Sullivan estimates the combined global market potential of Smart City segments (transportation, healthcare, building, infrastructure, energy and governance) to be $1.5 trillion ($20 billion on sensors alone by 2050, according to Navigant Technology).

The combined growth of IoT and Smart Cities will be a force to reckon with in 2019!

2.  Artificial Intelligence (AI)

Emergent artificial intelligence (AI), machine learning, human-computer interface, and augmented reality technologies are no longer science fiction. Head-spinning technological advances allow us to gain greater data-driven insights than ever before.

The ethical debate about AI is fervent over the threatening implications of future technologies that can think like a human (or better) and make their own decisions. The creation of a “HAL”-type entity, as depicted in Stanley Kubrick’s film 2001: A Space Odyssey, is not far-fetched.

To truly leverage our ability to use data-driven insights, we need to make sure our thinking about how best to use this data keeps pace with its availability.

The vast majority of digital data is unstructured: a complex mesh of images, texts, videos and other data formats. Estimates suggest 80-90 percent of the world’s data is unstructured and growing at an increasingly rapid rate each day.

To even begin to make sense of this much data, advanced technologies are required. Artificial intelligence is the means by which this data is processed today, and it’s already a part of your everyday life.

In 2019, companies and governments will continue to develop technology that distributes artificial intelligence and machine learning software to millions of graphics and computer processors around the world. The question is: how far away are we from a “HAL” with the capacity for human-level analysis and techno-emotions?

3.  Quantum Computing

The world of computing has witnessed seismic advancements since the invention of the electronic calculator in the 1960s. The past few years in information processing have been especially transformational.

What were once thought of as science fiction fantasies are now technological realities. Classical computing has become exponentially faster and more capable, and our enabling devices smaller and more adaptable.

We are starting to evolve beyond classical computing into a new data era called quantum computing. It is envisioned that quantum computing will accelerate us into the future by impacting the landscape of artificial intelligence and data analytics.

The quantum computing power and speed will help us solve some of the biggest and most complex challenges we face as humans.

Gartner describes quantum computing as: “[T]he use of atomic quantum states to effect computation. Data is held in qubits (quantum bits), which have the ability to hold all possible states simultaneously. Data held in qubits is affected by data held in other qubits, even when physically separated. This effect is known as entanglement.” In a simplified description, quantum computers use quantum bits, or qubits, instead of the traditional binary bits of ones and zeros used in digital communications.
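Superposition and entanglement can be demonstrated numerically in a few lines. The sketch below simulates two qubits with plain NumPy (no quantum SDK): a Hadamard gate puts the first qubit into an equal superposition, and a CNOT gate entangles it with the second, leaving a state in which the only possible measurement outcomes are 00 and 11.

```python
# Two-qubit state-vector demo: Hadamard then CNOT yields the Bell state
# (|00> + |11>) / sqrt(2), the textbook example of entanglement.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control = first qubit

state = np.kron([1, 0], [1, 0])                # both qubits start in |0>
state = CNOT @ (np.kron(H, np.eye(2)) @ state) # H on qubit 1, then CNOT
print(np.round(state, 3))                      # [0.707 0. 0. 0.707]
```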

Futurist Ray Kurzweil said that mankind will be able to “expand the scope of our intelligence a billion-fold” and that “the power of computing doubles, on average, every two years.” Recent breakthroughs in physics, nanotechnology and materials science have brought us into a computing reality that we could not have imagined a decade ago.

As we get closer to a fully operational quantum computer, a new world of supercomputing beckons that will impact almost every aspect of our lives. In 2019 we are inching closer.

4.  Cybersecurity (and Risk Management)

Many corporations, organizations and agencies continued to be breached throughout 2018, despite cybersecurity investments in information assurance. The cyber threats grow more sophisticated and deadly with each passing year. The firm Gemalto estimated that data breaches compromised 4.5 billion records in the first half of 2018. And a University of Maryland study found that hackers now attack computers every 39 seconds.

In 2019 we will be facing a new and more sophisticated array of physical security and cybersecurity challenges (including automated hacker tools) that pose significant risk to people, places and commercial networks.

The nefarious global threat actors are terrorists, criminals, hackers, organized crime, malicious individuals, and in some cases, adversarial nation states.

The physical has merged with the digital in the cybersecurity ecosystem. The more digitally interconnected we become in our work and personal lives, the more vulnerable we will become. Now everyone and anything connected is a target.

Cybersecurity is the digital glue that keeps IoT, Smart Cities, and our world of converged machines, sensors, applications and algorithms operational.

Addressing the 2019 cyber-threat also requires incorporating a better and more calculated risk awareness and management security strategy by both the public and private sectors. A 2019 cybersecurity risk management strategy will need to be comprehensive, adaptive and elevated to the C-Suite. 

I have just touched on a few of the implications of four emerging technology areas that will have significant impact in our lives in 2019.

These areas are just the tip of the iceberg as we really are in the midst of a paradigm shift in applied scientific knowledge.  We have entered a new renaissance of accelerated technological development that is exponentially transforming our civilization.

Yet with these benefits come risks. With such catalyzing innovation, we cannot afford to lose control. The real imperative for this new year is planning and systematic integration.

Hopefully that will provide us with a guiding technological framework that will keep us prosperous and safe.

Article by Chuck Brooks Special to Forbes Magazine
Chuck Brooks is an Advisor and Contributor to Cognitive World. In his full-time role he is the Principal Market Growth Strategist for General Dynamics Mission Systems.

AI and Nanotechnology Team Up to Bring Humans to the Brink of IMMORTALITY, Top Scientist Claims


IMMORTAL: Human beings could soon live forever 

HUMAN beings becoming immortal is a step closer following the launch of a new start-up.

Dr Ian Pearson has previously said people will have the ability to “not die” by 2050 – just over 30 years from now.

Two of the methods he said humans might use were “body part renewal” and linking bodies with machines so that people live their lives through an android.

Following Dr Pearson’s predictions, immortality may now be a step nearer with the launch of a new start-up, Human, which is hoping to make the immortality dream a reality with an ambitious plan.

Josh Bocanegra, the CEO of the company, said he hopes to use artificial intelligence technology to create the company’s own human being within the next three decades.

He said: “We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioural patterns, thought processes and information about how your body functions from the inside-out.

“This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human.

“Using cloning technology, we will restore the brain as it matures.” 

Last year, UK-based stem cell bank StemProtect said it could eventually develop treatments that allow humans to live to the age of 200.

Mark Hall, from StemProtect, said at the time: “In just the same way as we might replace a joint such as a hip with a specially made synthetic device, we can now replace cells in the body with new cells which are healthy and younger versions of the ones they’re replacing.

“That means we can replace diseased or ageing cells – and parts of the body – with entirely new ones which are completely natural and healthy.”

Watch: Live to 2050 and “Live Forever” – Really?

Watch Dr. Ian Pearson Talk About the Possibility of Immortality by 2050

7 Emerging Technologies That Are Changing Mission-Critical Processes: IoT, AI, AR, and More


An article by Jorge Sagastume, Vice President at EscrowTech International, Inc.

Sometimes, even the simplest of processes can be critical to the continued day-to-day operation of your business. That’s why businesses should take a proactive approach to enhancing their processes, making use of the latest technologies that can facilitate business process management.

Here are 7 technologies — including Blockchain technology and the Internet of Things — that are already demonstrating their potential to completely overhaul existing critical processes:

1. Blockchain

Your data is important. In fact, it’s crucial to your vital business processes: it can tell you what you need to do, how you need to do it, and when it needs to be done. So what happens if that data is inaccurate, or is tampered with from either internal or external sources? Process failure. That’s where Blockchain technology comes in. The ‘Blockchain’ is a database or ledger that records transactions, activity, or behaviors automatically, without the need for human input, and its records cannot be manually altered, changed, or amended after the fact, significantly boosting the accuracy, security, and efficiency of your critical processes. Most commonly associated with cryptocurrency, Blockchain can be used in practically any industry.
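To make the tamper-evident ledger idea concrete, here is a minimal sketch of a hash-linked chain of blocks (illustrative Python only; the block contents are invented, and a real blockchain adds distributed consensus, which this toy omits):

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash covers its data and the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Each new block embeds the hash of the one before it, forming the chain.
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("invoice #1042 approved", chain[-1]["hash"]))
chain.append(make_block("shipment #88 dispatched", chain[-1]["hash"]))

def is_valid(chain):
    """Recomputing the hashes exposes any after-the-fact edit to an earlier block."""
    for prev, curr in zip(chain, chain[1:]):
        body = {k: v for k, v in curr.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["hash"] != expected or curr["previous_hash"] != prev["hash"]:
            return False
    return True

print(is_valid(chain))                      # True
chain[1]["data"] = "invoice #1042 voided"   # tamper with history...
print(is_valid(chain))                      # False: the altered block no longer matches its hash
```

Because each block’s hash depends on the previous one, changing any historical record breaks every later link, which is what makes the ledger tamper-evident.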

2. Internet of Things

Mission-critical processes are essential for the continued smooth running of a business, but an ongoing concern is that they can be challenging to analyze and review to ensure they’re the most efficient, effective, and productive processes the business could be using. That’s why many businesses are looking into the Internet of Things, or IoT. IoT is the concept of interconnected devices: one talks to another, and so on, as necessary. These devices can also be set up to operate on simple ‘if this, then that’ rules, as sketched below. In terms of mission-critical processes, connected devices can be used to gather data from multiple areas to comprehensively monitor and record how you work.
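Here is a minimal sketch of those ‘if this, then that’ device rules (plain Python; the sensor names, readings, and thresholds are invented for illustration):

```python
# Minimal sketch of rule-driven IoT devices: sensors report readings,
# and simple "if condition, then action" rules react to them.
rules = [
    # (sensor, condition on its reading, action to trigger)
    ("warehouse_temp", lambda celsius: celsius > 30.0, "start_cooling_unit"),
    ("line_3_vibration", lambda mm: mm > 5.0, "alert_maintenance"),
]

def on_reading(sensor, value):
    """Called whenever a connected device reports a new reading."""
    for rule_sensor, condition, action in rules:
        if sensor == rule_sensor and condition(value):
            print(f"{sensor}={value} -> triggering {action}")

on_reading("warehouse_temp", 32.5)    # -> triggering start_cooling_unit
on_reading("line_3_vibration", 2.1)   # no rule fires
```

Real deployments typically route such readings through a message broker (for example, over a protocol like MQTT) rather than direct function calls, but the rule pattern is the same.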

3. Business Process Automation Software

Business Process Automation software, or BPA software, works to simplify your mission-critical processes, minimize the need for human input (thereby reducing the risk of human error), and streamline the way you work. However, a heavy reliance on automation software isn’t an entirely risk-free endeavor, particularly if you use cloud-based software. While the cloud has advantages, there are also concerns. If your business relies on third-party software for mission-critical processes, consider a software escrow agreement, in which the source code for the BPA software is held by a neutral agent and released to you should your provider go bankrupt.

4. Artificial Intelligence

Artificial Intelligence, or AI, is a key catalyst of the new ‘digital transformation’: a shift from rigid business processes to more flexible approaches built on intelligent software. AI and machine-learning technologies become smarter with continued use as they ‘learn’ more about your operations. This can enable your technology to identify process flow patterns, apply fixes to enhance a process, and locate trends in your way of working that highlight room for improvement. By merging with existing business process management platforms, the technology can also predict how your business processes will fare in the future, ultimately improving continuity, lowering costs, and boosting efficiency.
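As a toy illustration of this kind of pattern-spotting, the sketch below learns what a “normal” run of a process step looks like from past durations and flags outliers (plain Python; the data and threshold are invented, and real BPM platforms use far richer models):

```python
import statistics

# Past durations (minutes) of one process step across historical runs;
# this stands in for the operational data an AI-enabled platform would collect.
past_durations = [12, 11, 13, 12, 14, 11, 13, 12]
mean = statistics.mean(past_durations)
stdev = statistics.stdev(past_durations)

def is_anomalous(duration, threshold=3.0):
    """Flag a run whose duration deviates from the learned norm by more
    than `threshold` standard deviations (a simple z-score test)."""
    return abs(duration - mean) / stdev > threshold

for run in [13, 12, 29]:
    status = "investigate" if is_anomalous(run) else "normal"
    print(f"run took {run} min -> {status}")
# The 29-minute run is flagged, pointing at a possible bottleneck or failure.
```

Commercial platforms replace the z-score with far richer models, but the loop is the same: learn a baseline from history, then flag deviations from it.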

5. Cloud Computing

Cloud computing has been around for a while, but only recently has it become ready to support mission-critical processes and applications. Part of this readiness stems from the longevity and continued strength of cloud providers, and their ability to demonstrate experience in IT management. By moving mission-critical processes to the cloud, businesses gain greater flexibility, enabling them to focus on their own core competencies, which, in many cases, are not IT-based. Cloud providers today can show solid track records in security and reliability, perhaps more so than businesses themselves can demonstrate, meaning mission-critical applications are in safe hands.

6. Edge Computing

Although cloud computing and edge computing are often described as polar opposites, both technologies have the potential to completely overhaul existing mission-critical processes. While cloud computing is built around a central ‘hub’, edge computing relies on many distributed facilities located closer to the user (at the ‘edge’ of the network). For mission-critical processes, the advantage for businesses is notably low latency, which can boost the speed of your processes and enable the real-time functionality that improves accuracy and efficiency. However, not all providers offer edge computing yet, and it is still considered an emerging technology.

7. Augmented Reality

Augmented reality already has a firm place in commerce: it’s used to try on clothes without buying them, check that furniture fits in the home, or see a new car on the driveway without signing a contract. Augmented reality, or AR, is still relatively new in terms of internal mission-critical processes, but it certainly seems to have a place. Google Glass was one of the first examples of how AR could be used in the enterprise, and how it could impact business processes. It can enable users to overlay their environment with vital information to ensure accurate troubleshooting, faster fix times, optimal productivity, better learning, and enhanced safety, all using a completely hands-free method.

About the author:
Jorge Sagastume is a Vice President at EscrowTech International, Inc. with 12 years of experience protecting IP and earning the trust of the greatest companies in the world. Jorge has been invited to speak on IP issues by foreign governments and international agencies.

MIT launches the “MIT Intelligence Quest … MIT IQ” (Video)



New Institute-wide initiative will advance human and machine intelligence research

MIT today announced the launch of the MIT Intelligence Quest, an initiative to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.

The announcement was first made in a letter MIT President L. Rafael Reif sent to the Institute community.

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known. (continued below)

Watch and Read About: Scott Zoldi, Director of Analytics at FICO, has published a report arguing that “we are just at the beginning of the golden age of analytics, in which the value and contributions of artificial intelligence (AI), machine learning (ML) and deep learning can only continue to expand as we accept and incorporate those tools into our businesses.” According to his predictions, the development and use of these technologies will continue to expand and strengthen in 2018.

(Continued)

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

“Today we set out to answer two big questions,” says President Reif. “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?”

MIT IQ: The Core and The Bridge

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.

Along with developing and advancing the technologies of intelligence, MIT IQ researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.

In all its activities, MIT IQ is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT IQ will connect and amplify existing excellence across labs and centers already engaged in intelligence research. It will also establish shared, central spaces conducive to group work, and its resources will directly support research.

“Our quest is meant to power world-changing possibilities,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. Chandrakasan, in collaboration with Provost Martin Schmidt and all four of MIT’s other school deans, has led the development and establishment of MIT IQ.

“We imagine preventing deaths from cancer by using deep learning for early detection and personalized treatment,” Chandrakasan continues. “We imagine artificial intelligence in sync with, complementing, and assisting our own intelligence. And we imagine every scientist and engineer having access to human-intelligence-inspired algorithms that open new avenues of discovery in their fields. Researchers across our campus want to push the boundaries of what’s possible.”

Engaging energetically with partners

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab, which was announced in September 2017. MIT researchers will collaborate with each other and with industry on challenges that range in scale from the very broad to the very specific.

“In the short time since we began our collaboration with IBM, the lab has garnered tremendous interest inside and outside MIT, and it will be a vital part of MIT IQ,” says President Reif.

John E. Kelly III, IBM senior vice president for cognitive solutions and research, says, “To take on the world’s greatest challenges and seize its biggest opportunities, we need to rapidly advance both AI technology and our understanding of human intelligence. Building on decades of collaboration — including our extensive joint MIT–IBM Watson AI Lab — IBM and MIT will together shape a new agenda for intelligence research and its applications. We are proud to be a cornerstone of this expanded initiative.”

MIT will seek to establish additional entities within MIT IQ, in partnership with corporate and philanthropic organizations.

Why MIT

MIT has been on the frontier of intelligence research since the 1950s, when pioneers Marvin Minsky and John McCarthy helped establish the field of artificial intelligence.

MIT now has over 200 principal investigators whose research bears directly on intelligence. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Department of Brain and Cognitive Sciences (BCS) — along with the McGovern Institute for Brain Research and the Picower Institute for Learning and Memory — collaborate on a range of projects. MIT is also home to the National Science Foundation–funded Center for Brains, Minds and Machines (CBMM) — the only national center of its kind.

Four years ago, MIT launched the Institute for Data, Systems, and Society (IDSS) with a mission of promoting data science, particularly in the context of social systems. It is anticipated that faculty and students from IDSS will play a critical role in this initiative.

Faculty from across the Institute will participate in the initiative, including researchers in the Media Lab, the Operations Research Center, the Sloan School of Management, the School of Architecture and Planning, and the School of Humanities, Arts, and Social Sciences.

“Our quest will amount to a journey taken together by all five schools at MIT,” says Provost Schmidt. “Success will rest on a shared sense of purpose and a mix of contributions from a wide variety of disciplines. I’m excited by the new thinking we can help unlock.”

At the heart of MIT IQ will be collaboration among researchers in human and artificial intelligence.

“To revolutionize the field of artificial intelligence, we should continue to look to the roots of intelligence: the brain,” says James DiCarlo, department head and Peter de Florez Professor of Neuroscience in the Department of Brain and Cognitive Sciences. “By working with engineers and artificial intelligence researchers, human intelligence researchers can build models of the brain systems that produce intelligent behavior. The time is now, as model building at the scale of those brain systems is now possible. Discovering how the brain works in the language of engineers will not only lead to transformative AI — it will also illuminate entirely new ways to repair, educate, and augment our own minds.”

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, and director of CSAIL, agrees. MIT researchers, she says, “have contributed pioneering and visionary solutions for intelligence since the beginning of the field, and are excited to make big leaps to understand human intelligence and to engineer significantly more capable intelligent machines. Understanding intelligence will give us the knowledge to understand ourselves and to create machines that will support us with cognitive and physical work.”

David Siegel, who earned a PhD in computer science at MIT in 1991 while pursuing research at MIT’s Artificial Intelligence Laboratory, and who is a member of the MIT Corporation and an advisor to the MIT Center for Brains, Minds, and Machines, has been integral to the vision and formation of MIT IQ and will continue to help shape the effort. “Understanding human intelligence is one of the greatest scientific challenges,” he says, “one that helps us understand who we are while meaningfully advancing the field of artificial intelligence.” Siegel is co-chairman and a founder of Two Sigma Investments, LP.

The fruits of research

MIT IQ will thus provide a platform for long-term research, encouraging the foundational advances of the future. At the same time, MIT professors and researchers may develop technologies with near-term value, leading to new kinds of collaborations with existing companies — and to new companies.

Some such entrepreneurial efforts could be supported by The Engine, an Institute initiative launched in October 2016 to support startup companies pursuing particularly ambitious goals.

Other innovations stemming from MIT IQ could be absorbed into the innovation ecosystem surrounding the Institute — in Kendall Square, Cambridge, and the Boston metropolitan area. MIT is located in close proximity to a world-leading nexus of biotechnology and medical-device research and development, as well as a cluster of leading-edge technology firms that study and deploy machine intelligence.

MIT also has roots in centers of innovation elsewhere in the United States and around the world, through faculty research projects, institutional and industry collaborations, and the activities and leadership of its alumni. MIT IQ will seek to connect to innovative companies and individuals who share MIT’s passion for work in intelligence.

Eric Schmidt, former executive chairman of Alphabet, has helped MIT form the vision for MIT IQ. “Imagine the good that can be done by putting novel machine-learning tools in the hands of those who can make great use of them,” he says. “MIT IQ can become a fount of exciting new capabilities.”

“I am thrilled by today’s news,” says President Reif. “Drawing on MIT’s deep strengths and signature values, culture, and history, MIT IQ promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.”

“MIT is placing a bet,” he says, “on the central importance of intelligence research to meeting the needs of humanity.”