Undergraduate Research in the IVILAB
The Interdisciplinary Visual Intelligence group has an active program for involving undergraduates in research. We strongly encourage capable and interested undergraduate students to become involved with research early on. Doing so is becoming increasingly critical for preparing for grad school and has also helped many students get jobs in industry.
Link to current opportunities and the application process.
Due to overwhelming interest in the IVILAB, we need to limit the number of undergraduate researchers. Unfortunately, we have to turn away many undergraduates looking for research experience.
Publishing (needs updating).
Undergraduate researchers working with the IVILAB have a strong record of contributing enough to research projects to become authors on papers. So far, twelve undergraduates have been authors on nineteen vision lab papers and three abstracts (needs updating). Click here for the list.
IVILAB undergraduate researchers past and present (needs updating).
Students who have participated in the IVILAB as undergraduates include Matthew Johnson (honors student, graduated December 2003), Abin Shahab (honors student, graduated May 2004), Ekaterina (Kate) Taralova (now at CMU), Juhanni Torkkola (now at Microsoft), Andrew Winslow (now at Tufts), Daniel Mathis, Mike Thompson, Sam Martin, Johnson Truong (now at SMU), Andrew Emmott (headed to Oregon State), Ken Wright, Steve Zhou, Phillip Lee, James Magahern, Emily Hartley, Steven Gregory, Bonnie Kermgard, Gabriel Wilson, Alexander Danehy, Daniel Fried, Joshua Bowdish, Lui Lui, Ben Dicken, Haziel Zuniga, Mark Fischer, Matthew Burns, Racheal Gladysz, Salika Dunatunga (honors, now at U. Penn), Kristle Schulz (honors), and Soumya Srivastava.
Examples of undergraduate research in the IVILAB (needs updating)
Understanding Scene Geometry

Funding for undergraduates provided by an REU supplement to NSF CAREER grant IIS-0747511.
(*) Photo credit Robert Walker Photography.
Semantically Linked Instructional Content (SLIC)

SLIC is partly supported by NSF grant EF-0735191.
Aligning image caption words with image elements
There are now millions of images on-line with associated text (e.g., captions). Information in captions is either redundant (e.g., the word "dog" occurs, and the dog is obvious) or complementary (e.g., there is sky above the dog, but it is not mentioned). Redundant information allows us to train machine learning methods to predict one of these modalities from the other. Alternatively, complementary information in the modalities can disambiguate uncertainty (see "Word Sense Disambiguation with Pictures" below), or provide for combined visual and textual searching and data mining. Under the guidance of PhD student Luca del Pero, undergraduates Phil Lee, James Magahern, and Emily Hartley have contributed to research on using object detectors to improve the alignment of natural language captions to image data, which has already led to a publication for them. For more information on this project, contact Luca del Pero (delpero AT cs DOT arizona DOT edu).
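The idea of exploiting redundant caption information can be illustrated with a minimal sketch: match caption words against object-detector outputs by label. The detector results and function names below are illustrative stand-ins, not the lab's actual pipeline.

```python
# Minimal sketch of caption-to-region alignment: caption words that share a
# label with a confident detection get aligned to that detection's box.
# Detections here are hypothetical (label, bounding box, confidence) triples.

def align_caption(caption, detections, score_threshold=0.5):
    """Return {word: bounding_box} for caption words matching a detection."""
    words = set(caption.lower().split())
    best = {}
    for label, box, score in detections:
        if label in words and score >= score_threshold:
            # Keep only the highest-scoring detection per word.
            prev = best.get(label)
            if prev is None or score > prev[1]:
                best[label] = (box, score)
    return {word: box for word, (box, score) in best.items()}

# "dog" is redundant with the caption and gets aligned; "sky" is
# complementary information (detected but not mentioned), so it is skipped.
detections = [("dog", (40, 60, 200, 220), 0.9),
              ("sky", (0, 0, 320, 50), 0.8)]
print(align_caption("a dog on the grass", detections))
```

A real system would additionally handle synonyms, plurals, and spatial relations between words; this sketch only shows the label-matching core.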
Funding for undergraduates provided by ONR and an REU supplement to NSF CAREER grant IIS-0747511.
On the left is the baseline result; the image on the right shows the result with detectors.
Simultaneously tracking many objects with applications to biological growth

This project is in collaboration with the Palanivelu lab.
Funding for this project provided by NSF grant IOS-0723421.
Identifying machine parts using CAD models

Funding for undergraduates provided by NSF Grant 0758596 and an REU supplement to NSF CAREER grant IIS-0747511.
Inferring Plant Structure from Images

To quantify plant structure, we fit geometric models of plants to image data. The picture on the right shows multiple views of an Arabidopsis plant (top) and, for two of those views (bottom left), the skeleton fits projected onto the image data using the camera models corresponding to those views. Undergraduates Sam Martin and Emily Hartley have helped collect the image data, arrange feature extraction, and create ground-truth fits for training and evaluation. This project is led by Kyle Simek. For more information, contact him (ksimek AT cs DOT arizona DOT edu).
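Projecting a fitted skeleton into a particular view uses a standard camera model. The sketch below shows the basic pinhole projection step with a made-up 3x4 camera matrix; the lab's calibrated cameras and fitting procedure are more involved.

```python
# Sketch of projecting a 3D skeleton point into an image with a pinhole
# camera model (3x4 matrix acting on homogeneous coordinates). The matrix
# values here are illustrative, not a real calibration.

def project(point3d, camera):
    """Project a 3D point to pixel coordinates using a 3x4 camera matrix."""
    x, y, z = point3d
    hom = [x, y, z, 1.0]  # homogeneous coordinates
    u, v, w = (sum(camera[r][c] * hom[c] for c in range(4)) for r in range(3))
    return (u / w, v / w)  # perspective divide

# Toy camera: focal length 100, principal point (50, 50), no rotation.
K = [[100, 0, 50, 0],
     [0, 100, 50, 0],
     [0, 0, 1, 0]]
print(project((1.0, 2.0, 4.0), K))  # prints (75.0, 100.0)
```

Applying this to every joint of a fitted skeleton, with the camera matrix for a given view, yields the projected skeleton that can be compared against image evidence.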
This project is in collaboration with the Palanivelu and Wing labs.
Funding for undergraduates provided by the NSF-funded iPlant project, via the University of Arizona UBRP program, and an REU supplement to NSF CAREER grant IIS-0747511.
Modeling and visualizing Alternaria

This project is in collaboration with the Pryor lab.
Support for undergraduates provided by TRIFF and an REU supplement to a Department of Computer Science NSF research infrastructure grant.
Inactive and Subsumed Projects
Word sense disambiguation with pictures

Support for undergraduates provided by TRIFF and an REU supplement to a Department of Computer Science NSF research infrastructure grant.
Vision system for flying robots



Browsing large image collections

Evaluation of image segmentation algorithms
