By far one of the most interesting and creative game mechanics we’ve encountered over the course of this semester is Listen Mode in The Last of Us. When the player holds down a shoulder button, the screen’s visuals fade in saturation, nearing black and white in some settings, and ordinary noises, such as the droning of Ellie’s voice, are stifled to soft muffles, while every body gains a white halo with animated sound waves emanating from wherever it makes noise. This combination grants Joel, and the player, expert-level spy abilities, allowing for better spatial awareness and strategic planning. Listen Mode is particularly successful because it mirrors sensitive biological, psychological, and neurological behaviors of the human body, to which players are, even if only subconsciously, attuned.
One of the most straightforward relationships drawn by this mechanic is between Listen Mode and selective auditory attention (SAA), or selective listening. According to current understandings of sensory processing and attention, all auditory information within an appropriate distance is registered by the ear, but these signals are constantly screened for importance as they enter the prefrontal cortex. Those deemed relevant or necessary for survival are selected for attention and conscious processing; the rest persist in working and short-term memory for less than 30 seconds before being gradually replaced by new auditory signals. This is an evolutionarily advantageous skill, as it allows a person to focus on the task at hand, especially in life-or-death situations, without irrelevant and distracting information constantly intruding. The designers of The Last of Us embed this natural process into Listen Mode as Joel’s focus shifts from Ellie’s voice to the shuffling of The Infected or other enemies. Although all programmed sounds remain constantly present, the balance of their clarity and volume is adjusted as Joel assumes a listening position.
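The audio rebalance described above can be sketched in a few lines. This is only a minimal illustration of the idea, not Naughty Dog’s actual implementation; all names (sound labels, the `muffle` factor, the relevance flag) are hypothetical.

```python
# Hypothetical sketch: in Listen Mode every sound keeps playing, but
# sources screened as irrelevant (e.g. narrative dialogue) are muffled
# rather than silenced, while survival-relevant sounds stay clear,
# mirroring selective auditory attention.

def listen_mode_mix(sounds, listen_mode, muffle=0.2):
    """Return per-sound volumes given {name: (base_volume, relevant)}."""
    mix = {}
    for name, (base_volume, relevant) in sounds.items():
        if listen_mode and not relevant:
            mix[name] = base_volume * muffle   # stifled to a soft muffle
        else:
            mix[name] = base_volume            # left at full clarity
    return mix

sounds = {
    "ellie_dialogue": (1.0, False),   # screened out as non-essential
    "clicker_shuffle": (0.6, True),   # selected for attention
}
print(listen_mode_mix(sounds, listen_mode=True))
```

The key design point this captures is that nothing is removed from the soundscape; only the relative clarity shifts, just as SAA filters rather than deletes incoming signals.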
I also found this mechanic remarkably synesthetic: as Joel expertly listens to and registers the sounds of nearby bodies, that information is translated to the player as visual auras depicting position, amplified by the desaturation of every other element on the screen. In this way, sounds are primarily seen, not heard. This begins to break down the clear delineation between the biological senses and constructs a conversation among them, balancing the player’s experience. In Listen Mode, hearing is deemed more important, and so seeing must be devalued to an extent, figuratively within the fiction, but also literally in how the developers of The Last of Us render the screen. Although the representation Listen Mode creates is fictional, it doesn’t disrupt gameplay because it remains rooted in the biological sensations of the human body.
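The sound-to-sight translation could be sketched as a simple mapping from an audible source to a halo whose intensity falls off with distance. Again, this is a speculative illustration under assumed names (`sound_to_halo`, `max_range`), not the game’s code.

```python
import math

def sound_to_halo(source_pos, player_pos, loudness, max_range=30.0):
    """Map an audible 2D source to a visual aura: louder and closer
    sounds produce stronger halos; beyond max_range nothing is drawn."""
    dx = source_pos[0] - player_pos[0]
    dy = source_pos[1] - player_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return None  # out of earshot, so no aura is rendered
    # Linear falloff: a sound at the player's feet is at full strength.
    intensity = loudness * (1.0 - dist / max_range)
    return {"position": source_pos, "intensity": round(intensity, 3)}
```

What matters here is the direction of the translation: an auditory quantity (loudness over distance) becomes a visual one (halo intensity), which is exactly the synesthetic crossover the mechanic performs.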
Branching slightly from the strict physical laws of the world, I think the synesthesia of Listen Mode introduces just enough of the fantastical into gameplay without steering too far from reality. The Infected are somewhat less than human, and to balance that, the player is granted a slight “super-humanness,” satisfying Clark Kent and Peter Parker childhood dreams all over the world. But because the ability is built from natural senses and sensations, it can be accepted without argument and incorporated seamlessly as a useful tool of survival.
These small but smart decisions by the developers, building biological counterparts into the Listen Mode mechanic, contribute to The Last of Us’s natural and smooth gameplay. Did you discover other relationships between Listen Mode and real-life behaviors? In what ways did they, or your awareness (or unawareness) of them, affect your gameplay?