IVILAB Research Projects
See also: IVILAB Undergraduate research
Computational Temporal Emotion Systems
This research brings theory-driven Bayesian modeling and inference into the domain of temporal emotional interactions within personal relationships. We are also developing a shared computational infrastructure for researchers in this domain in collaboration with iPlant, an NSF-funded computational infrastructure project initially focused on plant biology. For more details on CompTIES, please see the project website. CompTIES is a collaboration between Emily Butler (PI, Family Studies and Human Development), Kobus Barnard (IVILAB / CS), Clay Morrison (IVILAB / SISTA), and Matthias Mehl (Psychology).
Funded by NSF grant BCS-1322940.
Link to:   1) Project website   and   2) Our NSF-funded workshop.
Understanding activities from video: Persistent Stare through Imagination (PSI)

The image shows an alignment of an activity video (person walking) and a physics-based simulator (Cartwheel3D). If we can understand the movie in terms of physics, then we can make natural inferences, such as that a box kicked out of the way is light, while one that is tripped over is probably heavy.
This project is a collaboration between researchers specializing in different levels of representation: 1) low level feature based recognition (Deva Ramanan's group at UCI), 2) mid level representations of what actors are in the scene and their trajectories (IVILAB), 3) learning behaviour models from image data (the SISTA robotics lab led by Ian Fasel), and 4) language level semantic understanding of activities (the SISTA AI lab led by project PI, Paul Cohen).
Link to:   1) ICCV 2013 paper.
Funded by DARPA through the Mind's Eye Program.
Learning Models of Object Structure
This project is developing approaches for learning stochastic 3D geometric models for object categories from image data. Representing objects and their statistical variation in 3D removes the confounds of the imaging process, and is more suitable for understanding the relation of form and function and how the object integrates into scenes.
The image to the right shows a simple model for chairs learned from a modest set of 2D images, using a representation of connected blocks and the key assumption that the topology is consistent across the object category. The particular instances fit during learning are shown in red. For the category we learn the topology and the statistics of the block parameters.
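The idea of a fixed category topology with learned per-block statistics can be sketched as follows. This is a minimal illustration only, with a made-up topology and made-up numbers, not the project's actual model or code:

```python
import random

# Hypothetical learned category statistics: the topology (which blocks the
# object is built from, and what each attaches to) is shared across the
# category, while each block's width/height/depth is modeled with a mean
# and standard deviation. All names and numbers are invented for
# illustration.
CHAIR_TOPOLOGY = {
    "seat": {"attach_to": None,   "size_mean": (0.45, 0.05, 0.45), "size_sd": (0.05, 0.01, 0.05)},
    "back": {"attach_to": "seat", "size_mean": (0.45, 0.50, 0.05), "size_sd": (0.05, 0.08, 0.01)},
    "leg":  {"attach_to": "seat", "size_mean": (0.04, 0.40, 0.04), "size_sd": (0.01, 0.05, 0.01)},
}

def sample_instance(topology, rng=random):
    """Sample one object instance: a size for every block, drawn from the
    category's per-block Gaussians, with the topology held fixed."""
    instance = {}
    for name, stats in topology.items():
        instance[name] = tuple(
            rng.gauss(m, s) for m, s in zip(stats["size_mean"], stats["size_sd"])
        )
    return instance

chair = sample_instance(CHAIR_TOPOLOGY)
```

Each sampled instance shares the category's structure but varies in its block dimensions, which is what lets one set of learned parameters explain many different chairs.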
Initial work formed the bulk of Joseph Schlecht's dissertation.
Link to:   1) Project page,   and   2) NIPS'09 paper.
Funded by NSF CAREER grant IIS-0747511.
Generative Modeling of Indoor Scenes


The image shows fits for two rooms. Red boxes are room boundaries, green boxes
are frames (pictures, windows, doors), and the blue boxes are furniture bounding
boxes.
Contributions to this work have been made by Luca del Pero, Joseph Schlecht, Ernesto Brau, Jinyan Guan, and undergraduates Emily Hartley, Bonnie Kermgard, Joshua Bowdish, Daniel Fried, and Andrew Emmott.
Link to:   CVPR'13 paper,   and data.
Link to:   CVPR'12 paper,   and data.
Link to:   CVPR'11 paper,   and data.
Funded by NSF CAREER grant IIS-0747511.
Plant Structure from Images

Quantifying plant geometry is critical for understanding how subtle details in form are caused by molecular and environmental changes. Developing automated methods for determining plant structure from images is motivated by the difficulty of extracting these details by human inspection, together with the need for high throughput experiments where we can test against a large number of variables.
To get numbers for structure, we fit geometric models of plants to image data. The top picture shows how features detected in multiple views from calibrated cameras can provide hypotheses for structure, which become "data-driven" samples in an MCMC approach to fitting structure. The next row shows two views (left), and corresponding fits of the skeleton to the image data, projected using the camera models for those two views. The heavy lifting on this project is being done by PhD student Kyle Simek.
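The flavor of a data-driven MCMC fit can be sketched in one dimension: proposals mix local random-walk moves with occasional jumps to hypotheses derived from detected features. This is a toy illustration under simplifying assumptions (a symmetric-enough proposal treated with a plain Metropolis ratio), not the project's actual sampler:

```python
import math
import random

def metropolis_fit(log_posterior, data_hypotheses, x0, n_iter=5000,
                   p_data_driven=0.3, step=0.1, rng=random):
    """Toy 1-D Metropolis sampler with "data-driven" proposals.

    With probability p_data_driven, propose jumping to a structure
    hypothesis suggested by image features (here just a list of numbers);
    otherwise take a local random-walk step. Treating both proposals as
    symmetric is a simplification made for illustration.
    """
    x = best_x = x0
    lp = best_lp = log_posterior(x0)
    for _ in range(n_iter):
        if data_hypotheses and rng.random() < p_data_driven:
            x_new = rng.choice(data_hypotheses)   # feature-derived jump
        else:
            x_new = x + rng.gauss(0.0, step)      # local exploration
        lp_new = log_posterior(x_new)
        if math.log(rng.random() + 1e-300) < lp_new - lp:
            x, lp = x_new, lp_new                 # accept the move
        if lp > best_lp:
            best_x, best_lp = x, lp               # track the best fit seen
    return best_x

# Usage: a toy posterior peaked at 2.0, with candidate structures from
# (hypothetical) feature detections. One candidate (1.9) is near the truth,
# so the data-driven jumps reach the mode quickly.
random.seed(0)
log_post = lambda x: -0.5 * (x - 2.0) ** 2
best = metropolis_fit(log_post, data_hypotheses=[0.5, 1.9, 3.0], x0=0.0)
```

The benefit over a pure random walk is that a good feature-derived hypothesis can move the chain straight into a high-probability region instead of diffusing there slowly.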
This project is in collaboration with the Palanivelu Lab and Rod Wing's group.
Funding. Kyle has held a Department of Education GAANN fellowship during the first part of this project. He has been helped by undergraduates funded by the NSF iPlant project.
Morphology of Brain Neurons

The image (*) shows an example of a wild type neuron (left), and a "filagree" phenotype associated with a particular mutation (right). Notice that the filagree neurites exhibit significantly more curvature. We are able to distinguish individuals from these populations relatively reliably using image processing methods to quantify the curvature. However, due to the huge variation in phenotype, developing such a method for each one is impractical. Rather, we are pursuing a structural modeling approach, together with statistical inference to learn the model parameters for each phenotype, and to fit learned models to individuals not used for learning.
(*) Image source: Kraft et al., J. Neuroscience, 2006.
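One simple way to quantify the curvature difference described above is to measure the mean absolute turning angle per unit arc length along a traced neurite. The sketch below illustrates the kind of measure involved (it is not the lab's code): straight traces score near zero, while wiggly "filagree" traces score high.

```python
import math

def mean_curvature(points):
    """Mean absolute turning angle per unit arc length along a traced
    neurite given as a list of (x, y) points -- a simple discrete proxy
    for curvature."""
    total_angle, total_len = 0.0, 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # Wrap the direction change to [-pi, pi) before accumulating.
        d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        total_angle += abs(d)
        total_len += math.hypot(x1 - x0, y1 - y0)
    return total_angle / total_len if total_len else 0.0

# A straight trace versus a zig-zag one standing in for a filagree neurite.
straight = [(float(i), 0.0) for i in range(10)]
wiggly = [(float(i), (i % 2) * 0.8) for i in range(10)]
```

Thresholding such a statistic is enough to separate the two populations in this toy setting, though as the text notes, hand-designing a measure per phenotype does not scale, which motivates the structural modeling approach.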
This work is in collaboration with Linda Restifo and her research group.
Fungal Structure from Image Stacks


Initial heavy lifting was done by PhD student Joseph Schlecht, guided by modeling work by undergraduate Kate Taralova (see below). Undergraduate Johnson Truong has worked on a fast approach for computing the blurring for model hypotheses.
Link to:   1) CVPR'07 paper.
This project is in collaboration with the Pryor lab.
Linking Vision and Language


Recent IVILAB work on this topic has focused on aligning image captions or keywords with visual features. The images on the right show how specialized object detectors can help improve this (labels for only the 10 largest regions are shown). The second example (right) shows that the object detector for "bird" can be used to align "eagle", given that we can look up the relation between the two using WordNet. Heavy lifting is being done by PhD student Luca del Pero, with a lot of help from undergraduate students Philip Lee, Emily Hartley, and James Magahern.
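The "bird"/"eagle" lookup can be illustrated with a toy hypernym table: a caption word matches a detector label if the label appears on the word's hypernym chain. The tiny table below is invented for illustration; the project uses WordNet itself:

```python
# Hypothetical miniature hypernym table (word -> its more general parent).
# WordNet provides the real version of these is-a relations.
HYPERNYM = {
    "eagle": "bird",
    "sparrow": "bird",
    "bird": "animal",
    "oak": "tree",
    "tree": "plant",
}

def matches_detector(word, label):
    """True if `label` is `word` itself or one of its hypernyms, so a
    caption word like "eagle" can be aligned with a "bird" detector."""
    while word is not None:
        if word == label:
            return True
        word = HYPERNYM.get(word)  # walk up the is-a chain
    return False
```

So a detector trained on a common category can still score regions for rarer caption words, as long as the lexical resource connects them.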
Link to:   1) ACM MM'11 paper,   2) CVPR'07 paper.
Link to our "words and pictures" research since 2000.
Support for this project has come from NSF, ONR, and TRIFF.
Finding Trails in Satellite Photos

Link to:   1) CVPR'13 paper.     2) CVPR'08 paper.
Funding. Both Scott and Andrew have held Department of Education GAANN fellowships while working on this project.
Semantically Linked Instructional Content (SLIC)

For much more information on the project, including publications and a live demo, see the SLIC project page.
SLIC is partly supported by NSF grant EF-0735191.
Simultaneously tracking many objects with applications to biological growth

We are currently working on a statistical model for pollen tube / ovule interaction behaviour given the tracks. This work is being led by PhD student Ernesto Brau with help from undergraduate researcher Phil Lee.
Link to:   1) CVPR'11 paper,   2) Data,   and   3) Code.
This project is in collaboration with the Palanivelu lab.
Funding for this project provided by NSF grant IOS-0723421.
Identifying machine parts using CAD models

This project is being led by PhD student Luca del Pero, with a lot of help from undergraduates Emily Hartley and Andrew Emmott.
Funding for undergraduates provided by NSF Grant 0758596 and an REU supplement to NSF CAREER grant IIS-0747511.
Inactive and Subsumed Projects
Modeling and visualizing Alternaria

This project is in collaboration with the Pryor lab.
Support for undergraduates provided by TRIFF and an REU supplement to a Department of Computer Science NSF research infrastructure grant.
Evaluation methodology for automated region labeling

Link to:   1) IJCV paper   and   2) Data and Code.
Funding provided by TRIFF.
Human based evaluation of content based image retrieval

Funding provided by TRIFF.
LSST association pipeline
The endeavour to build the Large Synoptic Survey Telescope (LSST) is a huge undertaking. See lsst.org for details. Not surprisingly, a significant part of the system is the data processing pipeline. We have worked on temporally linking observations of objects assumed to be asteroids to establish orbits, and on building data systems for querying the emerging time/space catalog. One of the goals of this research is to help identify potentially hazardous asteroids. This is joint work with Alon Efrat, Bongki Moon, and the many individuals working on the LSST project.
Link to:   1) overview paper.
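The temporal-linking idea can be sketched with a greedy toy linker: a track absorbs the detection in the next epoch that lies closest to the position extrapolated from its recent motion. This is only an illustration of the linking problem with made-up data; real asteroid linking fits Keplerian orbits over many epochs, not straight lines:

```python
def extrapolate(track):
    """Predict the next position assuming constant velocity over unit
    time steps (or stay put if the track has only one point)."""
    if len(track) < 2:
        return track[-1]
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def link_tracks(epochs, gate=1.5):
    """Greedy toy linker over per-epoch lists of (x, y) detections:
    extend each track with the nearest detection within `gate` of its
    extrapolated position."""
    tracks = [[p] for p in epochs[0]]
    for dets in epochs[1:]:
        for track in tracks:
            px, py = extrapolate(track)
            cand = min(dets,
                       key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2,
                       default=None)
            if cand and math_dist((px, py), cand) <= gate:
                track.append(cand)
    return tracks

def math_dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Two objects moving at constant velocity, observed over three epochs.
epochs = [
    [(0.0, 0.0), (10.0, 0.0)],
    [(1.0, 1.0), (11.0, 1.0)],
    [(2.0, 2.0), (12.0, 2.0)],
]
tracks = link_tracks(epochs)
```

Even in this toy form, the gating step is what keeps the number of candidate linkages manageable, which is the core scaling concern for a survey of LSST's size.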
Funding provided by a sub-contract to NSF award #0551161.