The growing technological sophistication of smartphones has transformed them from mere communication devices into essential companions that we interact with throughout the day. With people's growing use of virtual agents and voice assistants on their smartphones, the devices are not only acting as intermediaries that connect users in different places, but also as communication terminals that present and react to different combinations of social cues.
Previous research on mobile media has primarily focused on how users treat these devices as communication channels, but more recent studies have begun to concentrate on the direct interactions between users and phones. Kun Xu, an assistant professor of emerging media in the University of Florida College of Journalism and Communications Department of Telecommunication, wanted to extend this field of research by examining the effects of two social dimensions of smartphones, language and modality, on social responses to these devices.
In Xu's study, participants received messages from a voice-assistant mobile app (think "Siri"). He compared anthropomorphic (having human characteristics) language and vocal cues with non-anthropomorphic language and text cues to see which had a bigger impact on users' social responses to smartphones.
Anthropomorphic messages used the active voice, self-referential statements such as "I" or "my", informal conversational speech such as "it's", "like", or "well", and exaggerated tones, for example "I like it soooo much" and "Super!". Non-anthropomorphic messages used the passive voice and non-self-referential, formal statements.
Results suggested that there were no significant differences between the anthropomorphic and non-anthropomorphic language cues or between voice messages and text messages in evoking social presence, perceived attraction of the phone, perceived trustworthiness of the phone, conformity intention, and users’ actual conformity.
However, the research did show that anthropomorphic cues had less effect on the most active users of mobile media. The more time people spend on mobile media, the less sensitive they are to the effects of anthropomorphic elements.
In addition, for those who used smartphones more innovatively, efficiently, and thoroughly, voice-based messages were more likely to lead them to perceive the phone as a social actor, be attracted to the phone, trust the phone, and be willing to comply with the phone's instructions. For those who relied only on the phone's routine features, text-based messages were more likely to trigger these social responses.
Theoretical advancements gleaned from this study could be applied to further research on mobile interface design, education, and mobile intervention. Embedding more humanlike speech and tones in smartphones could help those who are less comfortable with the technology, such as the elderly, both by addressing safety concerns and by helping them learn to use new technology.
The original research, "Language, Modality, and Mobile Media Use Experiences: Social Responses to Smartphone Cues in a Task-Oriented Context," appeared in Telematics and Informatics on Jan. 11, 2020.
This story originally appeared on UF College of Journalism and Communications.