Pre-attentive Processing and Its Application to Interface Design

overview

Pre-attentive processing is the collection of information without conscious effort. This pre-attentive process, occurring within 250-500 milliseconds, was critical in our development as a species. Given the vast amount of information found in our field of view, processing critical pieces of this information independent of attentional load, e.g., threats, would be advantageous to our survival (Lang et al., 2016; Sherman & Usrey, 2021). In his book, The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning, Daniel Bor wrote, "The process of combining more primitive pieces of information to create something more meaningful is a crucial aspect both of learning and of consciousness and is one of the defining features of human experience" (Bor, 2012, p. 125). Though this hints (perhaps not so subtly) at a conscious, top-down interplay, the overall sentiment applies. The goal of design is to inform, whether through a life-critical interface like those used by first responders or a less critical task like sifting through massive amounts of data. Design, at every step, must work for the user. Therefore, pre-attentive design practices should create the underlying organizational relationships that allow the user to effortlessly construct a map to be utilized in later stages of conscious search.

After reviewing the neurological and psychophysical aspects of pre-attentive visual processing, the World Bank DataBank interface will be analyzed through the lens of pre-attentive perceptual organization.

feature integration theory

Before discussing the neuroscience involved in pre-attentive processing, it is essential to discuss a model that has helped define how we can attend to objects pre-attentively. In their work, A Feature-Integration Theory of Attention, Treisman and Gelade explained: "features are registered early, automatically, and in parallel across the visual field, while objects are identified separately and only at a later stage, which requires focused attention" (Treisman & Gelade, 1980, p. 98). Their work advanced earlier work done by Ulric Neisser in 1967 (Wolfe & Robertson, 2012). Feature Integration Theory divides our attention into two stages, pre-attentive and attentive. The first stage postulates that through a pre-attentive "feature search," objects in the visual field are viewed at the elemental level. Certain visual elements like color, form, movement, and spatial positioning (Healey et al., 1993) "spontaneously segregate (pop-out)" regardless of other distractors (Pomerantz et al., 1977). These basic elements are compiled into a series of feature maps that are combined into a master map (salience map), which directs serial search during the later, focused-attention stage, as shown by the two models in figure 1.
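To make the two-stage model concrete, the following is a minimal sketch, not drawn from the sources above, of how parallel feature maps might be combined into a single salience map in the spirit of the Koch-Ullman and Itti architectures shown in figure 1. The map contents, sizes, function names, and the simple normalization scheme are illustrative assumptions.

```python
import numpy as np

def normalize(feature_map: np.ndarray) -> np.ndarray:
    """Rescale a feature map to [0, 1] so no single channel dominates."""
    lo, hi = feature_map.min(), feature_map.max()
    return np.zeros_like(feature_map) if hi == lo else (feature_map - lo) / (hi - lo)

def salience_map(feature_maps: dict) -> np.ndarray:
    """Combine parallel feature maps (color, orientation, motion, ...) into one
    master map whose peaks guide the later, serial focused-attention stage."""
    combined = sum(normalize(m) for m in feature_maps.values())
    return combined / len(feature_maps)

# Hypothetical 4x4 activations for two feature channels.
maps = {
    "color":       np.array([[0, 0, 0, 0],
                             [0, 9, 0, 0],
                             [0, 0, 0, 0],
                             [0, 0, 0, 0]], dtype=float),
    "orientation": np.array([[1, 1, 1, 1],
                             [1, 2, 1, 1],
                             [1, 1, 1, 1],
                             [1, 1, 5, 1]], dtype=float),
}
master = salience_map(maps)
peak = np.unravel_index(int(np.argmax(master)), master.shape)
print(peak)  # (1, 1): the location a focused-attention stage would visit first
```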

However, fully understanding pre-attentive perception starts with understanding the underlying neuroanatomy of vision.
Figure 1: Models of Feature Integration Theory. Note. The bottom-up attentional models for early visual feature detection as proposed by Koch and Ullman (left) and Itti et al. (right) (Itti et al., 2005).

the neuroscience of pre-attentive visual processing

Visual transduction is the process by which light projected on the retina is transformed into neural activity. These neural "signals" pass their image-specific information in a hierarchical, feedforward fashion to specific anatomical areas, each playing its part in the processing of visual images (Kafaligonul et al., 2015).

retinal ganglion cells

As light maps onto the retina, rod and cone photoreceptors transduce this image into neural signals. These neural signals are passed to retinal ganglion cells through bipolar cells. Retinal ganglion cells form the output pathway of the retina, transmitting highly processed and integrated signals to the visual processing areas in the brain. Retinal ganglion cell receptive fields follow an ON- and OFF-center arrangement: ON-center cells are depolarized by illumination of their receptive field center (RFC), while OFF-center cells are depolarized by decreased illumination of their RFC (van Wyk et al., 2009). This depolarization, shaped by chemical contributions from inhibitory horizontal and amacrine cells (found at either end of the bipolar cells), impedes the transmission of signals to neighboring ganglion cells, a mechanism known as lateral inhibition. Lateral inhibition exaggerates contrast and promotes edge detection in pre-attentive visual perception. Once the neural signal reaches the retinal ganglion cell, it leaves the eye through the ganglion axons that form the optic nerve (Blasdel, 2001). From this point forward, the signals travel according to their visual field: nasal retina signals cross at the optic chiasm and join the temporal retina signals of the opposite eye. An important note is that when light first hits the retina, it forms a representational map within the ganglion cells. This map, called a retinotopic map, is projected across connected neurons at each stage of the visual pathway, from its origin at the retina to the LGN and visual cortex (Blasdel, 2001; Itti et al., 2005).
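The ON-/OFF-center arrangement and lateral inhibition described above are frequently modeled as a center-surround, difference-of-Gaussians operation. The sketch below is a minimal illustration of that idea rather than a model of actual retinal circuitry; the kernel widths and the synthetic test image are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image: np.ndarray, sigma_center: float = 1.0,
                    sigma_surround: float = 3.0) -> np.ndarray:
    """Difference of Gaussians: excitation from the receptive field center
    minus inhibition from the surround, which exaggerates contrast at edges."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

# A synthetic luminance step: dark on the left half, bright on the right half.
step = np.zeros((64, 64))
step[:, 32:] = 1.0

response = center_surround(step)
# The response stays near zero in uniform regions and peaks at the boundary,
# mimicking how lateral inhibition promotes edge detection.
print(abs(response[:, 30:34]).max() > abs(response[:, :16]).max())  # True
```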

the lateral geniculate nucleus

The lateral geniculate nucleus (LGN) is the relay center for the visual system. Here in the LGN, signals from specialized retinal ganglion cells called M-Type and P-Type cells are parsed into the six layers of the LGN (Nassau, 1998). The motion-sensitive M-Type cells synapse with layers 1 and 2, while P-Type cells, sensitive to color, fine detail, and slow-moving or still objects, synapse with layers 3, 4, 5, and 6 (Meissirel et al., 1997). These neurons are retinotopically organized across each layer and form a precise retinotopic map, meaning the visual field projected initially on the retina is also projected here on the LGN (Derrington, 2001; Itti et al., 2005). In addition, a third and less-studied retinal ganglion cell, the K-Type (non-M, non-P) cell, passes information to the striate cortex from the thin interlaminar zones between the six LGN layers, though the exact information carried along this route is still not entirely known. Given its size and placement deep within the thalamus, much of what we know of the LGN is based on non-human primate experiments, and although its complete function is not entirely understood, it is clear that a primary function of the LGN is to enhance luminance contrast for better edge detection and to carry signals into the visual cortex in a largely linear fashion (Nassau, 1998; Itti et al., 2005; Müller-Axt et al., 2021).
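As a compact, purely illustrative summary of the pathway segregation described above, the sketch below records the M-, P-, and K-Type assignments in a small lookup structure; the field names and the lookup helper are assumptions made for this example, not an established data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LgnPathway:
    cell_type: str        # retinal ganglion cell class
    lgn_targets: tuple    # principal LGN layers (or interlaminar zones) targeted
    sensitivities: tuple  # stimulus properties the pathway favors

PATHWAYS = (
    LgnPathway("M-Type", (1, 2), ("motion",)),
    LgnPathway("P-Type", (3, 4, 5, 6), ("color", "fine detail", "slow or still objects")),
    LgnPathway("K-Type", ("interlaminar zones",), ("not fully characterized",)),
)

def targets_for(cell_type: str) -> tuple:
    """Look up which LGN layers a given ganglion-cell class projects to."""
    return next((p.lgn_targets for p in PATHWAYS if p.cell_type == cell_type), ())

print(targets_for("P-Type"))  # (3, 4, 5, 6)
```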

the visual cortex

The visual cortex, located within the occipital lobe of the cerebral cortex, is where visual processing occurs. Retinotopic and spatiotopic maps are found throughout the visual cortex (Wandell et al., 2007). The retinotopic map, now a product of successive transformations starting at the retina, is mapped across neurons in the visual cortex (Blasdel, 2001). Spatiotopic maps are likewise mapped across neurons but represent spatial coordinates, describing locations independent of where the retinas are focused (Turi & Burr, 2012). The visual cortex is divided into five areas, commonly called V1-V5 (Huff et al., 2021). Together, these areas contribute to the recognition and classification of visual stimuli. Neurons in the lowest visual area (V1) respond predominantly to edges and lines. These neurons feed forward to neurons at the next stage of the hierarchy, which code for more complex features, and by V4, V5, and beyond, neurons are selective for everything from basic shapes to full objects (Herzog & Clarke, 2014).

early-stage visual processing

Early-stage visual processing is that which happens in the first two to three levels of the visual cortex. Studies with non-human primates have shown that up to 40% of visual processing occurs within areas V1 and V2 (Lennie, 1998). The primary visual cortex (V1) is the first region within the visual cortex to receive and process visual information. V1 is divided into six distinct layers, each comprising different cell types and functions, with the signals inbound from the LGN arriving at layer 4. Neurons in V1 have receptive fields tuned to stimulus color, location, orientation of edges, and spatial frequency, all of which form the basis for edge detection in pre-attentive processing (Stevens, 2015; Huff et al., 2021). Information leaving the primary visual cortex feeds forward to multiple areas, but mainly to V2. Like V1, V2 is tuned to simple object properties, but it improves edge detection by specializing in more complex stimuli such as figure-ground boundaries, motion, angles, curves, and non-Cartesian gratings (Hubel & Livingstone, 1987; Itti et al., 2005; Anzai et al., 2007). Information leaving V2 divides into the ventral and dorsal streams, where object recognition (ventral) and spatial and visual-motor processing (dorsal) take place (Huff et al., 2021).
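V1's orientation-tuned receptive fields are commonly approximated with Gabor filters. The sketch below, a simplification under assumed parameter values rather than a model of cortical physiology, builds a tiny filter bank and reports the orientation that best matches a patch containing a vertical luminance edge.

```python
import numpy as np

def gabor_kernel(theta: float, size: int = 15, sigma: float = 3.0,
                 wavelength: float = 6.0) -> np.ndarray:
    """A real-valued Gabor patch whose carrier oscillates along angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def preferred_orientation(patch: np.ndarray, n_orientations: int = 8) -> float:
    """Return the filter-bank orientation (in degrees) with the strongest response."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    responses = [abs(np.sum(patch * gabor_kernel(t))) for t in thetas]
    return float(np.degrees(thetas[int(np.argmax(responses))]))

# A vertical edge: luminance changes across columns, so the theta = 0 filter
# (carrier varying along x, stripes parallel to the edge) responds most strongly.
patch = np.zeros((15, 15))
patch[:, 8:] = 1.0
print(preferred_orientation(patch))  # 0.0
```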

perceptual organization

The human brain, guided by the eye and the visual cortex, is a massively parallel processor attuned to patterns in its visual field (Ware, 2021). Therefore, perceptually related primitives within this field will be grouped and processed pre-attentively. The first serious attempt at understanding these pre-attentive groupings resulted in what we know today as the Gestalt laws of pattern perception (Ware, 2021). Though not without controversy or detractors (the structuralists), the nascent principle of perceptual organization postulated by the Gestalt psychologists, that "wholes are different from the sum of their parts," remains valid and practiced today (Wagemans et al., 2012). In their work on the perception of wholes and their component parts, Pomerantz et al. expressed, "...the position we have taken here is that wholes are perceived by their emergent features which are not the parts themselves but rather stem from the intersection of these parts" (Pomerantz et al., 1977, p. 434). Further, regions connected by uniform visual properties such as luminance, color, and motion strongly tend to be organized as a single perceptual unit (Palmer & Rock, 1994). The original grouping principles consisted of proximity, similarity, common fate, good continuation, closure, symmetry, and parallelism. Later, synchrony, common region, element connectedness, and uniform connectedness were added (Wagemans et al., 2012).
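To make one of these principles concrete in code, the sketch below groups the cells of a small grid by uniform connectedness, treating 4-connected cells that share the same value as one perceptual unit. The grid, the single grouping property, and the flood-fill approach are illustrative assumptions rather than a perceptual model.

```python
def group_by_uniform_connectedness(grid):
    """Label 4-connected cells sharing the same value as one perceptual unit."""
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            current += 1
            stack = [(r, c)]          # iterative flood fill from this seed
            while stack:
                y, x = stack.pop()
                if not (0 <= y < rows and 0 <= x < cols):
                    continue
                if labels[y][x] is not None or grid[y][x] != grid[r][c]:
                    continue
                labels[y][x] = current
                stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels

# Two "blue" regions separated by a "white" background yield three units in total.
grid = [
    ["blue", "blue", "white", "blue"],
    ["blue", "white", "white", "blue"],
]
print(group_by_uniform_connectedness(grid))  # [[1, 1, 2, 3], [1, 2, 2, 3]]
```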

design review

The World Bank Group works in every central area of development. Their policy on access to information has made them a leader in transparency and changed the way major institutions make information available to the public. Consequently, an interface that organizes such a massive amount of data must be designed to support pre-attentive perceptual organization or else risk confusing and burdening the user.

common region

The principle of common region states that elements will be perceived as grouped if they are located within a common region of space, i.e., if they lie within a connected, homogeneously colored or textured region or within an enclosing contour (Palmer, 1992). As shown in figure 2, a heavier vertical line divides the interface into two main spaces.
Figure 2: Common region used to separate two main regions
Dissecting the left region further, we can see in figure 3 that horizontal lines divide this space into additional common regions.
Figure 3: Showing common regions created with horizontal lines
Figure 4 shows the use of hue to create alternating vertical columns of common regions; conversely, the horizontal lines in the data table create a common region for each row, tying the data together from left to right.
Figure 4: Using hue to create common regions
One area that could be improved, as shown in figure 5, is the use of superfluous lines that create regions of no real value and serve only to separate the table from the functionality that controls it.
Figure 5: Showing superfluous lines that create unnecessary regions
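Before moving on, the following is a hedged sketch of how the common-region relationships noted above could be expressed programmatically: hypothetical element bounding boxes are assigned to the enclosing region that contains them. The Box shape, the region names, and all coordinates are invented for illustration and are not taken from the DataBank interface itself.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box: left, top, right, bottom in screen pixels."""
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, other: "Box") -> bool:
        return (self.left <= other.left and self.top <= other.top
                and self.right >= other.right and self.bottom >= other.bottom)

def group_by_common_region(regions: dict, elements: dict) -> dict:
    """Assign each element to every enclosing region, mirroring how users
    perceive items inside the same contour as belonging together."""
    return {name: [e for e, box in elements.items() if region.contains(box)]
            for name, region in regions.items()}

# Hypothetical layout: a left filter panel and a right data-table panel.
regions = {"filters": Box(0, 0, 300, 800), "table": Box(310, 0, 1200, 800)}
elements = {
    "country filter": Box(20, 40, 280, 80),
    "series filter": Box(20, 100, 280, 140),
    "data grid": Box(330, 60, 1180, 700),
}
print(group_by_common_region(regions, elements))
# {'filters': ['country filter', 'series filter'], 'table': ['data grid']}
```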

proximity

The principle of proximity states that shapes, objects, or design elements located in close proximity tend to be perceived as a group (O'Connor, 2013). Figure 6 shows how white space can promote proximity and organize buttons that share related functionality.
Figure 6: White space used to create an adjacent common region
Likewise, figure 7 shows how the proximity of the table filtering categories establishes that these elements share a commonality of some form, in this case, their impact on the table data. Taking this a step further, the proximity of the tabs shown in figure 8 signals that they form a "family" and will affect the same area.
Figure 7 & 8: Proximity of filters and tabs create a common relationship
An area where proximity creates ambiguity is shown in figure 9. The proximity of the table title and the “meta data” dropdown has compressed the feedback text, creating a noisy and unnecessary relationship between the elements. Adding white space between these elements would help establish the proper, independent relationship.
Figure 9: Showing compression of screen elements
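To illustrate the proximity principle itself, the sketch below clusters hypothetical element centers whose pairwise distance falls under a threshold. The coordinates and the threshold value are assumptions chosen only to show the grouping behavior, not measurements from the interface.

```python
from math import dist

def group_by_proximity(points, threshold):
    """Single-link grouping: a point joins, and merges, every existing group
    that already has a member within `threshold` of it."""
    groups = []
    for p in points:
        near, far = [], []
        for g in groups:
            (near if any(dist(p, q) <= threshold for q in g) else far).append(g)
        merged = [q for g in near for q in g] + [p]
        groups = far + [merged]
    return groups

# Two button rows separated by vertical white space form two perceptual groups.
buttons = [(10, 10), (40, 10), (70, 10), (10, 200), (40, 200)]
print(group_by_proximity(buttons, threshold=50))
# [[(10, 10), (40, 10), (70, 10)], [(10, 200), (40, 200)]]
```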

similarity

Similarity states that we tend to group shapes, objects, or design elements that share some level of similarity in terms of color, tone, texture, shape, orientation, or size (O'Connor, 2013). Figure 10 shows that even though each button is a different length, their similarity in shape reinforces their relationship. Interestingly, a subset within similarity referred to as “anomaly” allows objects within groupings to stand out. As the arrows indicate, hue is used to create diversity within the group, alerting the user that something within is different.
Figure 10: Showing similarity and anomaly

Something that might be improved is the similarity between the page title and the table title. As shown in figure 11, styling the table title so that it does not share the same hue, size, and relative proximity as the page title would help lessen the chance of creating a false relationship.
Figure 11: Similarity of text could cause a false grouping
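A minimal sketch of similarity-based grouping follows: hypothetical buttons are bucketed by shared visual attributes (hue and shape here), and a lone mismatch surfaces as the kind of anomaly discussed above. The attribute names, button labels, and values are invented for illustration.

```python
from collections import defaultdict

def group_by_similarity(elements, attributes=("hue", "shape")):
    """Bucket elements that share the same values for the chosen attributes."""
    groups = defaultdict(list)
    for name, props in elements.items():
        groups[tuple(props[a] for a in attributes)].append(name)
    return dict(groups)

# Three blue rounded buttons and one orange one: the orange button pops out.
buttons = {
    "Download": {"hue": "blue",   "shape": "rounded"},
    "Share":    {"hue": "blue",   "shape": "rounded"},
    "Print":    {"hue": "blue",   "shape": "rounded"},
    "Apply":    {"hue": "orange", "shape": "rounded"},
}
groups = group_by_similarity(buttons)
anomalies = [names for names in groups.values() if len(names) == 1]
print(anomalies)  # [['Apply']]
```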

alignment

Alignment creates a construct that indicates a direct relationship. "It can be expected that element alignment along a particular orientation will increase the odds of a global grouping along this orientation" (Claessens & Wagemans, 2005, p. 1450). Given the amount of content within data tables, proper alignment creates continuation and constructs a simple path for the eye to follow. In figure 12 we can see how the alignment of text, aided by common region, creates a relational construct down each column. Additionally, at the top of the table, the horizontal alignment and center justification of the column headings set them apart from the rest of the table, effectively squaring off this space to be interpreted as a separate area.
Figure 12: Alignment of column text
Figure 13 shows an alternate view of the table where data-specific dots are connected, creating continuation from point to point even when impeded by other lines.
Figure 13: Alternate view showing graph
One alignment that could be adjusted is shown in figure 14. The alternating pattern caused by the misaligned objects adds noise (distractors) and might break the perception that these fields relate to a common task.
Figure 14: Showing misaligned filter fields
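As a final illustrative check, the sketch below tests whether a set of hypothetical form fields share a left edge within a small tolerance, the kind of alignment relationship that figure 14 shows being broken. The coordinates and tolerance are assumptions.

```python
def is_left_aligned(boxes, tolerance=2):
    """True when every box's left edge sits within `tolerance` pixels of the others."""
    lefts = [left for (left, top, right, bottom) in boxes]
    return max(lefts) - min(lefts) <= tolerance

# Hypothetical filter fields: in the second set, the third field is indented
# and breaks the perceived grouping.
aligned_fields = [(20, 100, 220, 130), (20, 140, 220, 170), (20, 180, 220, 210)]
misaligned_fields = [(20, 100, 220, 130), (20, 140, 220, 170), (35, 180, 220, 210)]
print(is_left_aligned(aligned_fields), is_left_aligned(misaligned_fields))  # True False
```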

conclusion

Understanding all aspects of visual processing is critical to creating mechanisms that allow users to efficiently, and ideally effortlessly, attend to the information they require. Paramount to this understanding is the initial, pre-attentive phase of visual processing, in which primitive elements are grouped subconsciously into larger organizational "wholes" that set the stage for an efficient user experience.

references

  1. Anzai, A., Peng, X., & Van Essen, D. (2007). Neurons in monkey visual area V2 encode combinations of orientations. Nature Neuroscience, 10, 1313–1321. https://doi.org/10.1038/nn1975
  2. Blasdel, G. (2001). Cortical activity: Differential optical imaging. International Encyclopedia of the Social & Behavioral Sciences, 2830–2837. https://doi.org/10.1016/b0-08-043076-7/03426-4
  3. Bor, D. (2012). The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning.
  4. Claessens, P. M., & Wagemans, J. (2005). Perceptual grouping in Gabor lattices: Proximity and alignment. Perception & Psychophysics, 67(8), 1446–1459. https://doi.org/10.3758/bf03193649
  5. Derrington, A. (2001, August 21). The lateral geniculate nucleus. Current Biology, 11(16), R635–R637. https://doi.org/10.1016/S0960-9822(01)00379-7
  6. Healey, C. G., Booth, K. S., & Enns, J. T. (1993, May). Harnessing preattentive processes for multivariate data visualization. In Graphics Interface (pp. 107–107). Canadian Information Processing Society.
  7. Herzog, M. H., & Clarke, A. M. (2014). Why vision is not both hierarchical and feedforward. Frontiers in Computational Neuroscience, 8. https://doi.org/10.3389/fncom.2014.00135
  8. Hubel, D. H., & Livingstone, M. S. (1987). Segregation of form, color, and stereopsis in primate area 18. The Journal of Neuroscience, 7(11), 3378–3415. https://doi.org/10.1523/JNEUROSCI.07-11-03378.1987
  9. Huff, T., Mahabadi, N., & Tadi, P. (2021, July 31). Neuroanatomy, visual cortex. In StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK482504/
  10. Itti, L., Rees, G., & Tsotsos, J. K. (2005). Neurobiology of Attention.
  11. Kafaligonul, H., Breitmeyer, B. G., & Öğmen, H. (2015). Feedforward and feedback processes in vision. Frontiers in Psychology. Retrieved March 5, 2022, from https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00279/full
  12. Lang, P. J., Simons, R. F., & Balaban, M. T. (2016). Attention and Orienting: Sensory and Motivational Processes. Routledge, Taylor & Francis Group.
  13. Lennie, P. (1998). Single units and visual cortical organization. Perception, 27(8), 889–935. https://doi.org/10.1068/p270889
  14. Meissirel, C., Wikler, K. C., Chalupa, L. M., & Rakic, P. (1997). Early divergence of magnocellular and parvocellular functional subsystems in the embryonic primate visual system. Proceedings of the National Academy of Sciences, 94(11), 5900–5905. https://doi.org/10.1073/pnas.94.11.5900
  15. Müller-Axt, C., Eichner, C., Rusch, H., Kauffmann, L., Bazin, P. L., Anwander, A., Morawski, M., & von Kriegstein, K. (2021). Mapping the human lateral geniculate nucleus and its cytoarchitectonic subdivisions using quantitative MRI. NeuroImage, 244, 118559. https://doi.org/10.1016/j.neuroimage.2021.118559
  16. Nassau, K. (1998). Color for Science, Art and Technology. Elsevier.
  17. O'Connor, Z. (2013). Colour, contrast and gestalt theories of perception: The impact in contemporary visual communications design. Color Research & Application, 40(1), 85–92. https://doi.org/10.1002/col.21858
  18. Palmer, S. E. (1992). Common region: A new principle of perceptual grouping. Cognitive Psychology, 24(3), 436–447. https://doi.org/10.1016/0010-0285(92)90014-s
  19. Palmer, S., & Rock, I. (1994). Rethinking perceptual organization: The role of uniform connectedness. Psychonomic Bulletin & Review, 1(1), 29–55. https://doi.org/10.3758/bf03200760
  20. Pomerantz, J. R., Sager, L. C., & Stoever, R. J. (1977). Perception of wholes and of their component parts: Some configural superiority effects. Journal of Experimental Psychology: Human Perception and Performance, 3(3), 422–435. https://doi.org/10.1037/0096-1523.3.3.422
  21. Sherman, S. M., & Usrey, W. M. (2021). Cortical control of behavior and attention from an evolutionary perspective. Neuron, 109(19), 3048–3054. https://doi.org/10.1016/j.neuron.2021.06.021
  22. Stevens, C. F. (2015). Novel neural circuit mechanism for visual edge detection. Proceedings of the National Academy of Sciences, 112(3), 875–880. https://doi.org/10.1073/pnas.1422673112
  23. Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12(1), 97–136. https://doi.org/10.1016/0010-0285(80)90005-5
  24. Turi, M., & Burr, D. (2012, April 25). Spatiotopic perceptual maps in humans: Evidence from motion adaptation. Proceedings of the Royal Society B: Biological Sciences. Retrieved March 6, 2022, from https://royalsocietypublishing.org/doi/10.1098/rspb.2012.0637
  25. van Wyk, M., Wässle, H., & Taylor, W. R. (2009). Receptive field properties of ON- and OFF-ganglion cells in the mouse retina. Visual Neuroscience, 26(3), 297–308. https://doi.org/10.1017/S0952523809990137
  26. Wagemans, J., Elder, J. H., Kubovy, M., et al. (2012). A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure-ground organization. Psychological Bulletin, 138(6), 1172–1217. https://doi.org/10.1037/a0029333
  27. Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2007, October 24). Visual field maps in human cortex. Neuron. Retrieved March 6, 2022, from https://www.sciencedirect.com/science/article/pii/S089662730700774X
  28. Ware, C. (2021). Information Visualization: Perception for Design. Elsevier.
  29. Wolfe, J. M., & Robertson, L. C. (2012). Chapter 8. In From Perception to Consciousness: Searching with Anne Treisman. Oxford University Press.
  30. Wolfe, J. M., & Utochkin, I. S. (2019). What is a preattentive feature? Current Opinion in Psychology, 29, 19–26. https://doi.org/10.1016/j.copsyc.2018.11.005