BurningEyedeas is a research group investigating perceptual organization, human vision, natural vision processing (computer vision), and image labeling. Our research is based primarily on a body of ongoing experiments conducted at the Burning Man arts festival. We plan to make our dataset available as a basis for research on image segmentation and image-region detection. We are currently updating our data to incorporate the August 2010 experiment; check back in a few weeks for the results.
We see our dataset as a complement to the Berkeley Segmentation Dataset and Benchmark (Martin, Fowlkes, Tal, & Malik, 2001), which provides a corpus of images hand-segmented and annotated for figure-ground status. The Berkeley set is well suited to studying local mechanisms; our set provides data for studying a broader definition of figural status and scene perception.
Our broader definition of figural status centers on the concept of a spatial taxon. A spatial taxon* (see Figure 1) includes not only objects but also groups of objects that take on the Gestalt of Figure, allowing an image to have multiple, possibly overlapping, regions of interest. We are also examining how figural status interacts with language specificity.
This research underlies the Natural Vision Processing and Labeling system used by Eyegorithm, Inc.
Figure 1: Spatial taxons. An image can be organized by spatial taxon* and parsed into an information architecture that allows a range of inclusiveness in the status of Figure. In the photograph above, Figure can include the butterfly and flower together, or the butterfly alone. Spatial taxons are defined relationally, in "layers of abstraction."
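The organization in Figure 1 can be thought of as a tree of candidate Figure regions. The sketch below is purely illustrative (the class and method names are our own, not part of any released BurningEyedeas or Eyegorithm code): each taxon is a region that can take Figure status, and its children are more specific taxa nested within it.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of a spatial-taxon hierarchy. Each node is a region
# that can take on Figure status; children are more specific taxa nested
# within it, so Figure can be read off at any layer of abstraction.
@dataclass
class SpatialTaxon:
    label: str                # e.g. "butterfly-and-flower"
    layer: int                # layer of abstraction (0 = whole scene)
    children: List["SpatialTaxon"] = field(default_factory=list)

    def add(self, child: "SpatialTaxon") -> "SpatialTaxon":
        self.children.append(child)
        return child

    def figures_at(self, layer: int) -> List[str]:
        """Collect every taxon that can take Figure status at a given layer."""
        found = [self.label] if self.layer == layer else []
        for child in self.children:
            found.extend(child.figures_at(layer))
        return found

# The butterfly photograph of Figure 1, expressed as nested taxa:
scene = SpatialTaxon("scene", 0)
pair = scene.add(SpatialTaxon("butterfly-and-flower", 1))
pair.add(SpatialTaxon("butterfly", 2))
pair.add(SpatialTaxon("flower", 2))

print(scene.figures_at(1))  # ['butterfly-and-flower']
print(scene.figures_at(2))  # ['butterfly', 'flower']
```

At layer 1 the butterfly-and-flower group is the Figure; at layer 2 the butterfly and the flower are each Figures in their own right, matching the "range of inclusiveness" described in the caption.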
* Barghout, Lauren (2009). Empirical data on the configural architecture of human scene perception. Vision Sciences Society Annual Meeting.
Email all inquiries to email@example.com