Recent IVILAB news

  • July 11, 2016
    Congratulations to Kyle Simek on having his paper "Branching Gaussian Processes with Applications to Spatiotemporal Reconstruction of 3D Trees" accepted to ECCV 2016. His coauthors are Ravishankar Palanivelu and Kobus Barnard.
  • April 22, 2016
    IVILAB lead, Kobus Barnard, has finally finished his book “Computational methods for integrating vision and language” (Synthesis Lectures on Computer Vision, Morgan & Claypool, April 2016, 227 pages).
  • April 19, 2016
    Congratulations to four IVILAB students on successfully defending their dissertations between April 14 and April 19. In chronological order:

    Andrew Predoehl: A Statistical Model of Recreational Trails

    Kyle Simek: Branching Gaussian Process Models for Computer Vision

    Jinyan Guan: Bayesian Generative Modeling for Complex Dynamical Systems

    Yekaterina Kharitonova: Geometry of Presentation Videos and Slides, and the Semantic Linking of Instructional Content (SLIC) System

  • See all news

Welcome to IVILAB

The IVILAB is led by Kobus Barnard. If you cannot find what you are looking for here, which is likely while we build out this new web presence, try his homepage.

To make sense of the world from data, we need to connect that data to relevant meaning systems. The IVILAB addresses this directly by developing representations that respect semantics and theory, and by linking them to data. We apply this methodology to a wide range of fascinating problems. Examples from current work include:
1) learning the structure of common objects;
2) stochastic geometric models for plants and microscopic fungi;
3) representations of neuron form;
4) indoor scene understanding;
5) tracking humans and understanding their activities; and
6) emotion dynamics in close relationships.

For each of these disparate problems, which cover entities, environments, and processes, our representations enable making connections to broader endeavors. For example, we seek to extract the geometric form of plants in terms of organs such as stems, leaves, and flowers, because quantifying their form and relationships (e.g., distributions over branching tendency and branch angles) can be linked to quantities associated with environmental, molecular, and survival data.

Our models attempt to explain data variability through mechanistic and theoretical considerations. For example, a human tracker based on 2D linkage of observations can be confused by people walking behind others, whereas a 3D tracker expects such occlusion and, in fact, can make use of that information. However, capturing the remaining unexplained variability leads us to statistical characterizations of the observations based on our explanatory models. We use Bayesian statistical methodology to combine all sources of information that link representation to data.
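As a purely illustrative sketch of this idea (placeholder functions and noise models, not IVILAB code), independent sources of evidence simply add in log space when forming a Bayesian posterior over an interpretation:

```python
import numpy as np

# Hypothetical sketch: combining independent evidence sources in a Bayesian
# posterior. The prior and likelihood terms below are stand-ins chosen only
# to show how separate information channels accumulate in log space.

def log_prior(theta):
    """Prior belief about scene parameters theta (standard normal placeholder)."""
    return -0.5 * np.sum(theta ** 2)

def log_likelihood_detections(theta, detections):
    """How well theta explains 2D detections (Gaussian noise placeholder)."""
    predicted = theta[: len(detections)]          # stand-in projection model
    return -0.5 * np.sum((detections - predicted) ** 2)

def log_likelihood_occlusion(theta, occlusion_flags):
    """Occlusion evidence: the 3D state predicts who blocks whom (placeholder)."""
    predicted_occluded = theta[: len(occlusion_flags)] > 0.0
    agree = predicted_occluded == occlusion_flags
    return np.sum(np.where(agree, np.log(0.9), np.log(0.1)))

def log_posterior(theta, detections, occlusion_flags):
    """Independent information sources add in log space."""
    return (log_prior(theta)
            + log_likelihood_detections(theta, detections)
            + log_likelihood_occlusion(theta, occlusion_flags))
```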

Our approach focuses on models with particular attributes, and we explicitly separate modeling concerns from inference (fitting models to interpret data or to learn model parameters). This leads to challenging inference problems, which we handle using various forms of MCMC sampling; we have developed significant expertise and software infrastructure for this over the past decade. Freeing ourselves from inference concerns while modeling enables us to collaborate more effectively with others on model development. In particular, we can work together on translating theoretical ideas into mathematical models without being constrained by the modeling/inference combinations available in existing software.
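A minimal sketch of that separation, assuming a generic random-walk Metropolis sampler (a toy, not the lab's inference infrastructure): the model enters only as a log-posterior function, and the sampler knows nothing about its internals.

```python
import numpy as np

def metropolis_sample(log_posterior, theta0, n_samples=5000, step=0.5, rng=None):
    """Generic random-walk Metropolis sampler.

    The model is supplied only through `log_posterior`; the inference code
    makes no assumptions about what the parameters mean.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    current_lp = log_posterior(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        proposal_lp = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            theta, current_lp = proposal, proposal_lp
        samples.append(theta.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Example: a 2D Gaussian "model" defined independently of the sampler.
    target = lambda th: -0.5 * np.sum((th - np.array([1.0, -2.0])) ** 2)
    draws = metropolis_sample(target, theta0=np.zeros(2))
    print(draws.mean(axis=0))   # should be near [1, -2]
```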