A unified account of visual search using a computational model

Date

2020

Publisher

Tartu Ülikool

Abstract

Visual search is a task that humans perform ubiquitously in everyday life. To understand this process, laboratory experiments have characterised the time humans need to locate a particular target object amongst others. Based on how this search time depends on the number of objects in the image, two kinds of search are believed to take place: feature search, where the target pops out of the search image and is found almost instantly by a parallel search mechanism, and conjunction search, involving more complex objects, where the search is serial and search time increases with the number of objects. In this work, we use a computational model to propose a unified process that can produce either feature or conjunction search characteristics depending on the precision of the attention guidance mechanism. We show that search performance can be partly explained by the precision, or capacity, of the encoding of distinct features used to guide attention during the search process.
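The set-size dependence that distinguishes the two search regimes can be illustrated with a toy simulation. The sketch below is purely illustrative and is not the thesis model: the baseline reaction time, the per-item serial cost, and the Gaussian noise term are all assumed parameters chosen to show a flat slope for feature search and a linearly increasing slope for conjunction search.

```python
import random

def simulate_search_time(set_size, mode, rng):
    """Toy reaction-time model (illustrative assumption, not the thesis model).

    Feature search: flat RT regardless of set size (parallel pop-out).
    Conjunction search: RT grows linearly with set size (serial inspection).
    """
    base_rt = 400.0   # ms, assumed baseline response time
    per_item = 40.0   # ms per inspected item, assumed serial cost
    noise = rng.gauss(0, 10)
    if mode == "feature":
        # Parallel search: target found regardless of distractor count.
        return base_rt + noise
    if mode == "conjunction":
        # Serial search: on average, half the items are inspected
        # before the target is found.
        return base_rt + per_item * set_size / 2 + noise
    raise ValueError(f"unknown search mode: {mode}")

rng = random.Random(0)
for n in (4, 8, 16, 32):
    ft = simulate_search_time(n, "feature", rng)
    ct = simulate_search_time(n, "conjunction", rng)
    print(f"set size {n:2d}: feature ~ {ft:5.0f} ms, conjunction ~ {ct:5.0f} ms")
```

Plotting search time against set size for the two modes would yield the classic flat versus sloped curves from which parallel and serial search are inferred.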

Keywords

Visual Search, Attention, Computational Neuroscience, Deep Learning, Convolutional Neural Networks
