FARM REPORT

Machines Have a Lot to Learn

Students stretch computers' decision-making capabilities.

May/June 2015

Photo: Norbert Von Der Groeben / Stanford News Service

Computer Science 229, the fall-quarter course in Machine Learning, has become an enrollment phenomenon. It attracted more than 700 students in 2014 and has drawn national media attention, as has its instructor, associate professor Andrew Ng.

Ng is a leader in artificial intelligence research, and it was his online Machine Learning course in 2011 that generated an enrollment of more than 100,000 and helped spur the founding of Coursera, a for-profit company that offers free online courses to anyone.

Machine Learning is about writing programming instructions (algorithms) that enable computers to "learn" how to analyze data and perform tasks. Students recognize the growing relevance of that kind of artificial intelligence, says Ng. "With the increased digitization of our society, we now have massive amounts of data," he notes. "And so the ability to write software to create value out of this data—which CS229 teaches—is increasingly in high demand."

Proof of that value is conspicuous in what Ng calls "the diversity and richness" of the work students produce. The course's difficulty level is categorized as advanced undergraduate/beginning graduate, and the four projects highlighted here aim to improve computerized decisions in situations ranging from medical care to football outcomes.

HOLD STILL: It can be hard to capture some birds' physical details (Photo: Marcial Quintero / Wikimedia)

Name That Bird

Aditya Bhandari, Ameya Joshi and Rohit Patki, all MS students.

The project: Identifying bird species from images is a challenge—even professional bird-watchers sometimes disagree. Intraclass variance is high due to differences in lighting, background and positioning (for example, perched birds that are partially obscured by branches). Our project employed machine learning to help amateur bird-watchers identify bird species from the images they capture.

The work: We obtained a data set of 12,000 images covering 200 species of birds. We used characteristic features, such as the color of the bird's back and the shapes of its wings and beak, to build a model that predicts the species from this data. We implemented various machine learning algorithms and fine-tuned the best one to push the accuracy as high as possible.
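
Once the features are extracted, this becomes a standard multiclass classification problem. Below is a minimal sketch in Python with scikit-learn; the placeholder feature matrix, the feature count and the choice of a random-forest classifier are assumptions for illustration, not the team's actual pipeline.

    # Hypothetical sketch: predict bird species from precomputed image features.
    # Assumes each image has already been reduced to a numeric feature vector
    # (e.g., back color, wing and beak shape descriptors).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((12000, 50))      # one 50-number feature row per image (placeholder)
    y = np.tile(np.arange(200), 60)  # a species label for each image, 200 classes

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    # Fit a multiclass classifier and measure accuracy on held-out images.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))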

The results: We could predict the species with an accuracy of 57 percent, which we consider acceptable given that there were 200 bird species to differentiate. In the future, we plan to build a mobile application that can help a bird-watcher identify the species simply by snapping a photo.

The Sounds Of Music

Ryan Diaz, Aaron Kravitz and Eliza Lupone, all '15.

The project: Our goal was to create an algorithm that could identify the genre of a song based upon its digital sound file. It's an easy task for the human ear, but to a computer, a song is simply a collection of numbers that specify a sound wave, and not a catchy combination of lyrics, melody and rhythm.

The work: To develop our algorithm, we worked with a data set of 1 million popular songs spanning 10 genres, along with their known features, including tempo, time signature and timbre, for small segments of each recording. Using techniques discussed in class and our individual knowledge, we made various attempts to improve the way machines recognize genre, most notably by identifying the loudest point of a recording and extracting information from that segment for computer analysis, a variation on the "acoustic fingerprint" idea behind the popular smartphone application Shazam.
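
The loudest-segment idea fits in a few lines of code. In the hypothetical Python sketch below, the segment-level loudness and timbre arrays, the song count and the classifier choice are all assumptions for illustration, loosely modeled on segment-style audio features rather than the team's actual data.

    # Hypothetical sketch: classify a song's genre from features of its loudest
    # segment. The data layout below is assumed, not the team's actual format.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def song_features(loudness, timbre, tempo):
        # loudness: (n_segments,) values; timbre: (n_segments, 12) coefficients
        peak = int(np.argmax(loudness))                 # loudest segment
        return np.concatenate([timbre[peak], [tempo]])  # 13-number feature vector

    # Placeholder training data: 1,000 songs labeled with one of 10 genres.
    rng = np.random.default_rng(0)
    X = np.stack([song_features(rng.random(100), rng.random((100, 12)),
                                rng.uniform(60, 180)) for _ in range(1000)])
    y = rng.integers(0, 10, size=1000)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("predicted genre id:", clf.predict(X[:1])[0])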

The results: After some fine-tuning of the algorithms, we were able to develop a system that can correctly identify a song's genre (out of 10 possibilities) 61.87 percent of the time. (Random picks, over 10 genres, would get it right only about 10 percent of the time.) Similar ongoing work in the field includes automatic tagging of songs on the popular music website last.fm.

When Hearts Skip A Beat

Richard Tang, MS student, and Saurabh Vyas, PhD student.

The project: There is a great clinical need for accurate detection and classification of cardiovascular arrhythmias. Automated alert systems in intensive-care units, for example, produce a large number of false alarms that waste staff resources or encourage lax monitoring. If arrhythmias are detected early enough, potentially life-threatening conditions, such as heart failure, can be avoided.

An electrocardiogram (ECG) provides a surrogate representation of cardiac activity; analysis of ECGs can allow for accurate classification of arrhythmias. We used a UC-Irvine data set to compare the effectiveness of four learning algorithms for arrhythmia classification. In particular, we employed a grouping paradigm to isolate features that are physiologically interconnected, in order to address the relative importance of each feature.
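
A comparison of several learning algorithms is usually run with cross-validation. The Python sketch below is a generic version of that step; the four models and the 30 placeholder features are assumptions, since the write-up does not name the algorithms (the 452 records match the size of the UC-Irvine arrhythmia data set).

    # Hypothetical sketch: compare four classifiers on arrhythmia features via
    # five-fold cross-validation. X and y are placeholders for the real data.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X = rng.random((452, 30))         # 452 patient records, 30 assumed features
    y = rng.integers(0, 5, size=452)  # five clinically relevant classes

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "support vector machine": SVC(),
        "random forest": RandomForestClassifier(random_state=0),
        "naive Bayes": GaussianNB(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.2f}")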

The work: Most automated methods that attempt to classify different types of arrhythmias rely on being able to extract "features," or key identifying attributes, that reliably distinguish one kind of arrhythmia from another. In the case of ECG data, these features are often pieces of the raw ECG signal at various time points, or the relationships between those time points. While such features are fine from a theoretical standpoint, they often have little clinical meaning and therefore offer no insight into how or why a system fails in certain situations.

Our approach relied on converting the ECG data into five clinically and physiologically relevant groups of features. Our features captured information ranging from the age and gender of the patient to such data as the duration and strength of the depolarization (change in electrical activity) of the left and right ventricles. Abnormalities in such attributes are associated with different types of arrhythmias. While other projects have also tried to use physiological features, they have been unable to design a principled method for assigning importance to each feature. For example, research suggests that while electrical activity in the left ventricle plays a big role in judging heart health, other features such as age and gender also play a role. Our system "learned," from historical patient data, how much each block of features should contribute to the final classification.
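
One simple way to learn how much each group of features contributes is stacking: train a small model within each physiological group, then learn weights over the groups' outputs. The Python sketch below is a hypothetical illustration of that idea under assumed groupings, not the team's actual method.

    # Hypothetical sketch: learn a per-group contribution by stacking one model
    # per feature group under a final classifier. Groupings are assumed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 452
    groups = {                                 # placeholder feature groups
        "demographics": rng.random((n, 2)),    # e.g., age, gender
        "left_ventricle": rng.random((n, 6)),  # e.g., depolarization measures
        "right_ventricle": rng.random((n, 6)),
    }
    y = rng.integers(0, 5, size=n)             # five arrhythmia classes

    # Stage 1: one classifier per group emits class probabilities.
    # (A rigorous version would use out-of-fold predictions here.)
    stage1 = {name: LogisticRegression(max_iter=1000).fit(Xg, y)
              for name, Xg in groups.items()}
    Z = np.hstack([m.predict_proba(groups[name]) for name, m in stage1.items()])

    # Stage 2: the final model's coefficients indicate each group's importance.
    final = LogisticRegression(max_iter=1000).fit(Z, y)
    print("training accuracy:", final.score(Z, y))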

The results: Our early results show that we are able to classify arrhythmias into one of five clinically relevant groups with an accuracy of approximately 90 percent. Furthermore, the system analyzes results in less than one minute on a laptop. In a clinical setting, we expect this system to run in real time and hence alert staff to potential heart dysfunction. The results also give physicians a way to interpret, and directly inform, the features the system uses to make its decisions. This could lead to fruitful partnerships between engineers, scientists and clinicians.

Place Your Bets

PhD students Steve Hoerning, Bobak Moallemi and Matthew Wilson, '13. 

The project: Fantasy football may seem like just an online game, but close to 33 million people participated in fantasy football leagues last year, spending around $800 million on media products. Participants compete to predict how well individual NFL players will perform in a given week. We used machine learning to estimate how many passing touchdowns a quarterback would throw in an upcoming matchup.

The work: We built mathematical models from statistics on previous NFL games to produce the most likely number of passing touchdowns in the next game. Choosing the statistics wasn't easy: An NFL game produces reams of numbers. We created many models with different variables, tested their performance, and used the statistics that best predicted passing touchdowns in our final model.
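
Since passing touchdowns are small counts, Poisson regression is one natural model family for this kind of prediction. The Python sketch below uses it as an assumed stand-in, with made-up predictors and data, since the article does not specify the team's final model.

    # Hypothetical sketch: predict a quarterback's passing touchdowns in the
    # next game with Poisson regression. Predictors are illustrative only.
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(0)
    # Assumed per-game predictors: QB's recent TD average, pass attempts,
    # and the opposing defense's passing TDs allowed per game.
    X = rng.random((500, 3)) * [3.0, 40.0, 2.5]
    y = rng.poisson(1.5, size=500)             # observed passing TDs (counts)

    model = PoissonRegressor().fit(X, y)
    next_game = np.array([[2.0, 35.0, 1.8]])   # hypothetical upcoming matchup
    print("expected passing TDs:", model.predict(next_game)[0])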

The results: The predictions made by our best model compared favorably to predictions made by ESPN, which estimates passing touchdowns for each week of the NFL season. Through the first 10 weeks of the 2014 season, ESPN's predictions were accurate 33 percent of the time, while our model was accurate 39 percent of the time. However, ESPN's estimates outperformed ours with respect to mean squared error, a measure that takes into account how far off predictions are when they miss.
