
Fadel Adib joins the Media Lab faculty

Fadel Adib SM ’13, PhD ’16 has been appointed an assistant professor in the Program in Media Arts and Sciences at the MIT Media Lab, where he leads the new Signal Kinetics research group. His group’s mission is to explore and develop new technologies that can extend human and computer abilities in communication, sensing, and actuation.

Adib comes to the lab from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), where he received his master's and PhD degrees in electrical engineering and computer science under the supervision of Dina Katabi, MIT professor of electrical engineering and computer science. Adib's doctoral thesis, "Wireless Systems that Extend Our Senses," demonstrated that wireless signals can be used as sensing tools to learn about the environment, enabling us to see through walls, track human gestures, and monitor human vital signs from a distance. His master's thesis, "See Through Walls with Wifi," won the best master's thesis award in computer science at MIT in 2013. He earned his bachelor's degree in computer and communications engineering from the American University of Beirut in his native Lebanon, graduating with the highest GPA in the university's digitally recorded history.

“We can get your locations, we can get your gestures, we can get your breathing,” Adib said at a Media Lab event in October 2016. “And we can even get your heart rate—all without putting any sensor on your body. This is exactly what our research is about.” Signal Kinetics researchers tap into the invisible signals that surround us — from WiFi to brain waves. The aim is to uncover, analyze, and engineer these natural and human-made networks, drawing on tools from computer networks, signal processing, machine learning, and hardware design.

“We are living in a sea of radio waves,” Adib told the Media Lab audience. “As our bodies move, we modulate these radio waves, similar to how you create waves when you move around in a pool of water. While we cannot see these with our naked eye, we can extract them and we can build intelligence in the environment to enable a large number of applications and extend our senses using wireless technology.” The technology is applicable to a broad range of needs: from monitoring an infant’s breathing or an elderly person who has fallen, to determining whether someone has sleep apnea, to detecting survivors in a burning building. The group’s research also has potential applications for gaming and filmmaking.

In 2015, Forbes magazine selected Adib among the 30 Under 30 Who Are Moving the World in Enterprise Technology. In 2014, MIT Technology Review chose him as one of the world’s 35 top innovators under the age of 35. His research has been identified as one of the 50 ways MIT has transformed computer science over the past 50 years.

“Fadel’s work in wireless sensing is groundbreaking and opens up all sorts of new opportunities,” says the Media Lab’s Pattie Maes, the Alex W. Dreyfoos Professor of Media Technology and academic head of the Program in Media Arts and Sciences. “I can’t wait to see what impact his presence in the lab will have on many of the research topics that we focus on, including Smart Cities, Responsive Environments, Extreme Bionics, Extended Intelligence, Tools for Health and Wellbeing, and more.”

Doing in days what takes data scientists months

Last year, MIT researchers presented a system that automated a crucial step in big-data analysis: the selection of a “feature set,” or aspects of the data that are useful for making predictions. The researchers entered the system in several data science contests, where it outperformed most of the human competitors and took only hours instead of months to perform its analyses.

This week, in a pair of papers at the IEEE International Conference on Data Science and Advanced Analytics, the team described an approach to automating most of the rest of the process of big-data analysis — the preparation of the data for analysis and even the specification of problems that the analysis might be able to solve.

The researchers believe that, again, their systems could perform in days tasks that used to take data scientists months.

“The goal of all this is to present the interesting stuff to the data scientists so that they can more quickly address all these new data sets that are coming in,” says Max Kanter MEng ’15, who is first author on last year’s paper and one of this year’s papers. “[Data scientists want to know], ‘Why don’t you show me the top 10 things that I can do the best, and then I’ll dig down into those?’ So [these methods are] shrinking the time between getting a data set and actually producing value out of it.”

Both papers focus on time-varying data, which reflects observations made over time, and they assume that the goal of analysis is to produce a probabilistic model that will predict future events on the basis of current observations.

Real-world problems

The first paper describes a general framework for analyzing time-varying data. It splits the analytic process into three stages: labeling the data, or categorizing salient data points so they can be fed to a machine-learning system; segmenting the data, or determining which time sequences of data points are relevant to which problems; and “featurizing” the data, the step performed by the system the researchers presented last year.
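As a rough illustration of how those three stages might fit together in code, here is a minimal Python sketch; the function names, signatures, and fixed lookback window are assumptions for illustration, not the researchers' actual system.

```python
import numpy as np

# Illustrative stand-ins for the three stages; names and signatures
# are hypothetical, not the researchers' API.

def label(series, is_salient):
    """Stage 1 (labeling): categorize the salient data points so they
    can be fed to a machine-learning system."""
    return [(t, x, is_salient(t, x)) for t, x in series]

def segment(labeled, window):
    """Stage 2 (segmentation): keep the stretches of data relevant to
    the problem; here, a fixed window preceding each salient point."""
    salient_times = [t for t, _, y in labeled if y]
    return [[(t, x) for t, x, _ in labeled if s - window <= t < s]
            for s in salient_times]

def featurize(segments):
    """Stage 3 (featurizing): summarize each segment as a numeric
    feature vector of simple statistics."""
    return np.array([[np.mean(xs), np.std(xs), np.ptp(xs)]
                     for xs in ([x for _, x in seg] for seg in segments)])
```

The third step corresponds to the featurization that last year's system automated.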

The second paper describes a new language for describing data-analysis problems and a set of algorithms that automatically recombine data in different ways, to determine what types of prediction problems the data might be useful for solving.

According to Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems and senior author on all three papers, the work grew out of his team’s experience with real data-analysis problems brought to it by industry researchers.

“Our experience was, when we got the data, the domain experts and data scientists sat around the table for a couple months to define a prediction problem,” he says. “The reason I think that people did that is they knew that the label-segment-featurize process takes six to eight months. So we better define a good prediction problem to even start that process.”

In 2015, after completing his master's degree, Kanter joined Veeramachaneni's group as a researcher. Then, in the fall of 2015, the two founded a company called Feature Labs to commercialize their data-analysis technology. Kanter is now the company's CEO, and Benjamin Schreck, another master's student in Veeramachaneni's group, joined the company as chief data scientist after receiving his degree in 2016.

Data preparation

Developed by Schreck and Veeramachaneni, the new language, dubbed Trane, should reduce the time it takes data scientists to define good prediction problems, from months to days. Kanter, Veeramachaneni, and another Feature Labs employee, Owen Gillespie, have also devised a method that should do the same for the label-segment-featurize (LSF) process.

To get a sense of what labeling and segmentation entail, suppose that a data scientist is presented with electroencephalogram (EEG) data for several patients with epilepsy and asked to identify patterns in the data that might signal the onset of seizures. Labeling would mean marking the readings that correspond to seizure onsets; segmentation would mean deciding which stretches of data leading up to each onset are relevant to predicting it.
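Continuing that hypothetical example, here is a sketch of what labeling and segmenting the EEG data might look like; the per-second sampling, the annotated onset times, and the five-minute lookback window are all assumed for illustration.

```python
import numpy as np

# Hypothetical EEG example (illustrative values only).
rng = np.random.default_rng(0)
eeg = rng.standard_normal(24 * 3600)   # one day of per-second EEG samples
onsets = [5400, 41000, 70250]          # annotated seizure-onset times (s)

labels = np.zeros_like(eeg)            # labeling: mark the onset samples
labels[onsets] = 1

window = 300                           # segmentation: keep the 5 minutes
segments = [eeg[t - window:t] for t in onsets]   # preceding each onset

# Each segment can now be "featurized" into statistics such as mean
# amplitude or variance and fed to a classifier.
```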

Bringing human intuition to planning algorithms

Every other year, the International Conference on Automated Planning and Scheduling hosts a competition in which computer systems designed by conference participants try to find the best solution to a planning problem, such as scheduling flights or coordinating tasks for teams of autonomous satellites.

On all but the most straightforward problems, however, even the best planning algorithms still aren’t as effective as human beings with a particular aptitude for problem-solving — such as MIT students.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are trying to improve automated planners by giving them the benefit of human intuition. By encoding the strategies of high-performing human planners in a machine-readable form, they were able to improve the performance of competition-winning planning algorithms by 10 to 15 percent on a challenging set of problems.

The researchers are presenting their results this week at the Association for the Advancement of Artificial Intelligence’s annual conference.

“In the lab, in other investigations, we’ve seen that for things like planning and scheduling and optimization, there’s usually a small set of people who are truly outstanding at it,” says Julie Shah, an assistant professor of aeronautics and astronautics at MIT. “Can we take the insights and the high-level strategies from the few people who are truly excellent at it and allow a machine to make use of that to be better at problem-solving than the vast majority of the population?”

The first author on the conference paper is Joseph Kim, a graduate student in aeronautics and astronautics. He’s joined by Shah and Christopher Banks, an undergraduate at Norfolk State University who was a research intern in Shah’s lab in the summer of 2016.

The human factor

Algorithms entered in the automated-planning competition — called the International Planning Competition, or IPC — are given related problems with different degrees of difficulty. The easiest problems require satisfaction of a few rigid constraints: For instance, given a certain number of airports, a certain number of planes, and a certain number of people at each airport with particular destinations, is it possible to plan planes’ flight routes such that all passengers reach their destinations but no plane ever flies empty?

A more complex class of problems — numerical problems — adds some flexible numerical parameters: Can you find a set of flight plans that meets the constraints of the original problem but also minimizes planes’ flight time and fuel consumption?

Finally, the most complex problems — temporal problems — add temporal constraints to the numerical problems: Can you minimize flight time and fuel consumption while also ensuring that planes arrive and depart at specific times?

For each problem, an algorithm has a half-hour to generate a plan. The quality of the plans is measured according to some “cost function,” such as an equation that combines total flight time and total fuel consumption.
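A toy example of such a cost function; the weights and the per-flight schema are invented for illustration.

```python
def plan_cost(flights, time_weight=1.0, fuel_weight=0.5):
    """Score a candidate plan; lower is better. Each flight is a dict
    with 'hours' and 'fuel_kg' fields, a made-up schema for this sketch."""
    total_hours = sum(f["hours"] for f in flights)
    total_fuel = sum(f["fuel_kg"] for f in flights)
    return time_weight * total_hours + fuel_weight * total_fuel

plan = [{"hours": 2.5, "fuel_kg": 4000}, {"hours": 1.0, "fuel_kg": 1500}]
print(plan_cost(plan))   # 1.0 * 3.5 + 0.5 * 5500 = 2753.5
```

A competition entrant is then judged on the cost of the best plan it can produce within the half-hour limit.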

Shah, Kim, and Banks recruited 36 MIT undergraduate and graduate students and posed each of them the planning problems from two different competitions, one that focused on plane routing and one that focused on satellite positioning. Like the automatic planners, the students had a half-hour to solve each problem.

“By choosing MIT students, we’re basically choosing the world experts in problem solving,” Shah says. “Likely, they’re going to be better at it than most of the population.”

A computer scientist takes on cancer with machine learning

Regina Barzilay is working with MIT students and medical doctors in an ambitious bid to revolutionize cancer care. She is relying on a tool largely unrecognized in the oncology world but deeply familiar to hers: machine learning.

Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science, was diagnosed with breast cancer in 2014. She soon learned that good data about the disease is hard to find. “You are desperate for information — for data,” she says now. “Should I use this drug or that? Is that treatment best? What are the odds of recurrence? Without reliable empirical evidence, your treatment choices become your own best guesses.”

Across different areas of cancer care — be it diagnosis, treatment, or prevention — the data protocol is similar. Doctors start the process by mapping patient information into structured data by hand, and then run basic statistical analyses to identify correlations. The approach is primitive compared with what is possible in computer science today, Barzilay says.

These kinds of delays and lapses (which are not limited to cancer treatment) can really hamper scientific advances, Barzilay says. For example, 1.7 million people are diagnosed with cancer in the U.S. every year, but only about 3 percent enroll in clinical trials, according to the American Society of Clinical Oncology. Current research practice relies exclusively on data drawn from this tiny fraction of patients. “We need treatment insights from the other 97 percent receiving cancer care,” she says.

To be clear: Barzilay isn’t looking to upend the way current clinical research is conducted. She just believes that doctors and biologists, and patients, could benefit if she and other data scientists lent them a helping hand. Innovation is needed, and the tools are there to be used.

Barzilay has struck up new research collaborations, drawn in MIT students, launched projects with doctors at Massachusetts General Hospital, and begun empowering cancer treatment with the machine learning insight that has already transformed so many areas of modern life.

Machine learning, real people

At the MIT Stata Center, Barzilay, a lively presence, interrupts herself mid-sentence, leaps up from her office couch, and runs off to check on her students.

She returns with a laugh. An undergraduate group is assisting Barzilay with a federal grant application, and they’re down to the wire on the submission deadline. The funds, she says, would enable her to pay the students for their time. Like Barzilay, they are doing much of this research for free, because they believe in its power to do good. “In all my years at MIT I have never seen students get so excited about the research and volunteer so much of their time,” Barzilay says.

At the center of Barzilay’s project is machine learning, or algorithms that learn from data and find insights without being explicitly programmed where to look for them. This tool, just like the ones Amazon, Netflix, and other sites use to track and predict your preferences as a consumer, can make short work of gaining insight into massive quantities of data.

Applying it to patient data can offer tremendous assistance to people who, as Barzilay knows well, really need the help. Today, she says, a woman cannot retrieve answers to simple questions such as: What was the disease progression for women in my age range with the same tumor characteristics?
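A sketch of the kind of query she has in mind, assuming a hypothetical table of de-identified patient records; the column names, values, and cohort thresholds are invented for illustration.

```python
import pandas as pd

# Hypothetical de-identified patient records (toy data).
records = pd.DataFrame({
    "age":         [44, 47, 45, 62, 46],
    "tumor_cm":    [1.2, 1.4, 1.3, 3.0, 1.1],
    "er_positive": [True, True, True, False, True],
    "progressed":  [False, True, False, True, False],
})

patient = {"age": 45, "tumor_cm": 1.3, "er_positive": True}
cohort = records[
    records["age"].between(patient["age"] - 5, patient["age"] + 5)
    & records["tumor_cm"].between(patient["tumor_cm"] - 0.5,
                                  patient["tumor_cm"] + 0.5)
    & (records["er_positive"] == patient["er_positive"])
]
# Fraction of similar patients whose disease progressed:
print(cohort["progressed"].mean())   # 0.25 in this toy data
```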

A special-purpose chip for speech recognition

The butt of jokes as recently as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices.

In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.
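As a rough check on those figures, the raw chip-to-phone ratio works out to savings of 99 percent or more; as the next paragraph notes, a full system sees somewhat less.

```python
phone_mw = 1000.0                # ~1 W for phone speech-recognition software
for chip_mw in (0.2, 10.0):      # the chip's reported range, in milliwatts
    print(f"{chip_mw} mW: {1 - chip_mw / phone_mw:.2%} less power")
# 0.2 mW: 99.98% less power
# 10.0 mW: 99.00% less power
```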

In a real-world application, that probably translates to a power savings of 90 to 99 percent, which could make voice control practical for relatively simple electronic devices. That includes power-constrained devices that have to harvest energy from their environments or go months between battery charges. Such devices form the technological backbone of what’s called the “internet of things,” or IoT, which refers to the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

“Speech input will become a natural interface for many wearable applications and intelligent devices,” says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT, whose group developed the new chip. “The miniaturization of these devices will require a different interface than touch or keyboard. It will be critical to embed the speech functionality locally to save system energy consumption compared to performing this operation in the cloud.”

“I don’t think that we really developed this technology for a particular application,” adds Michael Price, who led the design of the chip as an MIT graduate student in electrical engineering and computer science and now works for chipmaker Analog Devices. “We have tried to put the infrastructure in place to provide better trade-offs to a system designer than they would have had with previous technology, whether it was software or hardware acceleration.”

Price, Chandrakasan, and Jim Glass, a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, described the new chip in a paper Price presented last week at the International Solid-State Circuits Conference.

The sleeper wakes

Today, the best-performing speech recognizers are, like many other state-of-the-art artificial-intelligence systems, based on neural networks, virtual networks of simple information processors roughly modeled on the human brain. Much of the new chip’s circuitry is concerned with implementing speech-recognition networks as efficiently as possible.

But even the most power-efficient speech recognition system would quickly drain a device’s battery if it ran without interruption. So the chip also includes a simpler “voice activity detection” circuit that monitors ambient noise to determine whether it might be speech. If the answer is yes, the chip fires up the larger, more complex speech-recognition circuit.
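A schematic sketch of that gating behavior, using a simple energy threshold as a stand-in for the voice-activity detector; on the chip, both stages are dedicated circuits, not software.

```python
import numpy as np

def voice_activity(frame, threshold=0.01):
    """Cheap, always-on check: is this audio frame energetic enough
    to plausibly be speech? (Illustrative energy-based detector.)"""
    return np.mean(frame ** 2) > threshold

def listen(frames, recognize):
    for frame in frames:            # low-power path runs continuously
        if voice_activity(frame):   # plausible speech detected:
            recognize(frame)        # wake the expensive recognizer
```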