Recent IVILAB news

  • January 16, 2020

    ToMCAT was featured in UA News (link) and picked up by the Arizona Daily Star (link)

  • November 2019

    ToMCAT. A collaboration between the Information School (INFO), Computer Science (CS), and Family Studies and Human Development (FSHD) has been awarded a large grant to develop a theory-of-mind-based cognitive architecture for teams (ToMCAT). The grant ($7.5M over 48 months) is part of the DARPA Artificial Social Intelligence for Successful Teams (ASIST) program. The PI and Co-PIs collaborating on this project are Adarsh Pyarelal (PI), Kobus Barnard, Emily Butler, Clayton Morrison, Rebecca Sharp, Mihai Surdeanu, and Marco Antonio Valenzuela-Escarcega. Data collection for the project will take place in the Lang Laboratory, housed in the Frances McClelland Institute for Children, Youth and Families in the Norton School of Family and Consumer Sciences.

    The goal of the project is to build artificially intelligent agents that understand both the social and goal-oriented aspects of teams in mission-like scenarios (e.g., search-and-rescue missions) and can reason about possible interventions. The agent, ToMCAT, must model human players' affect and beliefs about the situation, as well as their beliefs about each other's affect and beliefs (theory of mind). We will ground this work in extensive measurements of humans interacting in small teams, including audio, video, eye tracking, electrocardiography (EKG), electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and self-report. Participants will execute missions within a Minecraft environment, with one to four human players interacting with the ToMCAT agent.

    Research areas. One unique aspect of this project is that we will record EEG and fNIRS simultaneously from all human team members to further our understanding of social coordination in teams. We expect the series of experiments to provide a large amount of unique data. ToMCAT's evolving theories of mind will be implemented using dynamic Bayesian networks interacting with latent low-level data representations provided by neural networks. In addition, we will need to understand dialogue as indicative of affect, plans, and mission goals. Finally, ToMCAT will need to both understand team plans and create plans of its own.

    Further information is available on the project website. This project started November 1, 2019. As we move forward, we will update this website regularly.

  • See all news

Welcome to IVILAB

IVILAB is led by Kobus Barnard. If you cannot find what you are looking for, which is likely while this new web presence is under construction, try his homepage.

To make sense of the world from data, we need to connect that data to relevant meaning systems. The IVILAB addresses this directly by working on representations that respect semantics and theory, and by linking them to data. We apply this methodology to a wide range of fascinating problems. Examples from current work include:
1) learning the structure of common objects;
2) stochastic geometric models for plants and microscopic fungi;
3) representations of neuron form;
4) indoor scene understanding;
5) tracking humans and understanding their activities; and
6) emotion dynamics in close relationships.

For each of these disparate problems, covering entities, environments, and processes, our representations enable connections to broader endeavors. For example, we seek to extract the geometric form of plants in terms of organs such as stems, leaves, and flowers, because quantifying their form and relationships (e.g., distributions over branching tendency and branch angles) can be linked to other quantities associated with environmental, molecular, and survival data.
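As a toy illustration of this kind of quantification, the sketch below fits a simple Gaussian summary to a handful of hypothetical branch-angle measurements. The angle values, the function name, and the choice of a Gaussian are all illustrative assumptions, not the lab's actual plant models.

```python
import math

def fit_branch_angle_distribution(angles_deg):
    """Summarize measured branch angles with a Gaussian (mean, std),
    as one might when quantifying branching tendency. Hypothetical
    example only; real plant models are far richer."""
    n = len(angles_deg)
    mean = sum(angles_deg) / n
    var = sum((a - mean) ** 2 for a in angles_deg) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical branch angles (degrees) extracted from one plant.
angles = [38.0, 42.5, 45.0, 41.0, 39.5, 44.0]
mu, sigma = fit_branch_angle_distribution(angles)
```

A fitted summary like (mu, sigma) is the kind of quantity that could then be related to environmental or molecular covariates.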

Our models attempt to explain data variability through mechanistic and theoretical considerations. For example, a human tracker based on 2D linkage of observations can be confused by people walking behind others, whereas a 3D tracker expects such occlusion and, in fact, can make use of that information. However, capturing the remaining unexplained variability leads us to statistical characterizations of the observations based on our explanatory models. We use Bayesian statistical methodology to combine all sources of information that link representation to data.
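A minimal sketch of how Bayes' rule combines several sources of information, using a two-hypothesis toy example (a tracked person being visible vs. occluded). The cue names and probabilities are hypothetical and far simpler than the lab's actual trackers.

```python
def posterior(prior, likelihoods):
    """Combine a prior over discrete hypotheses with conditionally
    independent per-cue likelihoods via Bayes' rule (normalized product)."""
    unnorm = list(prior)
    for lik in likelihoods:
        unnorm = [p * l for p, l in zip(unnorm, lik)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# Two hypotheses about a tracked person: "visible" vs. "occluded".
prior = [0.7, 0.3]
appearance = [0.2, 0.8]   # weak appearance match suggests occlusion
motion = [0.5, 0.5]       # motion cue is uninformative here
post = posterior(prior, [appearance, motion])
```

Even with a prior favoring "visible", the weak appearance evidence shifts the posterior toward "occluded", which is how a 3D tracker can exploit expected occlusion rather than being confused by it.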

Our approach focuses on models with particular attributes, and we explicitly separate modeling concerns from inference (fitting models to interpret data or to learn model parameters). This leads to challenging inference, which we handle using various forms of MCMC sampling; we have developed significant expertise and software infrastructure for this over the last decade. Freeing ourselves from inference concerns while modeling lets us collaborate more effectively with others on model development. In particular, we can work together on translating theoretical ideas into mathematical models without being constrained by which modeling/inference combinations are available in existing software.
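To illustrate this separation of modeling from inference, here is a minimal random-walk Metropolis sampler in which the model enters only as a log-density function; any model exposing a log-density can be swapped in without touching the sampler. This is a generic textbook sketch, not the lab's inference infrastructure.

```python
import math
import random

def metropolis(log_post, init, n_samples, step=0.5, seed=0):
    """Generic random-walk Metropolis sampler. The model appears only
    through `log_post`, keeping modeling and inference separate."""
    rng = random.Random(seed)
    x, lp = init, log_post(init)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)      # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop            # accept the proposal
        samples.append(x)
    return samples

# Toy model: a standard normal log-density, defined independently
# of the sampler above.
samples = metropolis(lambda x: -0.5 * x * x, init=0.0, n_samples=5000)
mean = sum(samples) / len(samples)
```

The same `metropolis` function would run unchanged on a very different log-posterior, which is the practical payoff of keeping the model and the inference machinery decoupled.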