
Our new Miss Moneypenny


The term ‘voice assistant’ is so new that Merriam-Webster’s online dictionary does not currently define it. Collins Online, however, defines the term as ‘a voice activated piece of software that can supply information and perform certain types of tasks’. Synonyms for these latter-day uber helpers include ‘chatbot’, ‘intelligent personal assistant’, ‘smart assistant’ or ‘virtual digital assistant’.

According to Forbes, 95% of all customer interactions are expected to be enhanced with AI technology by 2025. The voice assistants we now live with in our daily lives include the well-known Siri, Alexa and Google Assistant (‘Hey Google’), and the lesser-known but just as smart Watson, Cortana and Bixby.

The earliest known ancestor of the voice assistant was Audrey, born at Bell Laboratories in 1952. Unfortunately, Audrey could only understand numbers. At the 1962 World’s Fair, IBM showcased the Shoebox, which could recognise 16 spoken English words. Harpy (named after the mythical Greek creature, half bird and half woman) was born in the 1970s at Carnegie Mellon University, and could understand 1,011 words (roughly the vocabulary of a three- or four-year-old). By 1987, Texas Instruments had created a chip for Julie, a talking doll who could carry on simple interactive conversations with children. In the 1990s, voice-recognition software such as Dragon Dictate had developed sufficiently to be able to assist with real work tasks.

As voice-assistance technology has evolved over the past six decades, contemporary research has shown that users’ acceptance of automated service technologies depends on:

  • functional performance (perceived ease of use, usefulness, and social norms, i.e. what people perceive they should or should not do in certain situations)
  • the ability to fulfil social–emotional needs (perceived humanness, social interactivity, social presence)
  • relational requirements (trust, rapport). 

In a study published in the January 2021 issue of the Journal of Business Research, researchers empirically validated the Service Robot Acceptance Model – that is, acceptance was high for functional performance, social–emotional needs and relational requirements. However, the caveat was that too much perceived humanness was not universally positive in terms of user acceptance. Competence and warmth were seen as the most important characteristics for voice assistants, and interacting in a social manner with a ‘pleasant demeanour’ was also seen as a positive. Displaying too much social dialogue (small talk and greetings), however, produced mixed results: some users perceived it as a fake attempt at being human, which led to discomfort.

[Image: banner with the Speeki logo and Speeki AI avatar Nicole holding a smartphone]

For complaint and whistleblowing hotlines, the use of voice assistants is now a foregone conclusion. Their appeal lies in their:

  • unfailing competence over human fallibility
  • consistent, confidential handling of cases using decision-tree logic
  • trustworthiness
  • user-friendly mannerisms (it is no coincidence that the vast majority of voice assistants have feminine attributes).
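The decision-tree logic mentioned above can be illustrated with a minimal sketch. The node names, prompts and report categories below are purely illustrative assumptions, not Speeki’s actual implementation:

```python
# Sketch of a decision-tree ("tree logic") intake flow for a whistleblowing
# voice assistant. Every prompt and category here is a hypothetical example.
from dataclasses import dataclass, field


@dataclass
class Node:
    prompt: str
    # Maps a caller's answer to the next node; an empty dict marks a leaf.
    branches: dict = field(default_factory=dict)


def anonymity_branch(topic: str) -> Node:
    """Build the shared 'anonymous or not?' sub-tree for a report topic."""
    return Node(
        "Would you like to remain anonymous? (yes/no)",
        {
            "yes": Node(f"Thank you. Your anonymous {topic} report has been logged."),
            "no": Node(f"Thank you. A case officer will contact you about the {topic} report."),
        },
    )


# A tiny intake tree: report category, then anonymity preference.
tree = Node(
    "What would you like to report: 'fraud' or 'harassment'?",
    {
        "fraud": anonymity_branch("fraud"),
        "harassment": anonymity_branch("harassment"),
    },
)


def run_intake(node: Node, answers: list) -> str:
    """Walk the tree with a scripted list of answers; return the final prompt."""
    for answer in answers:
        if not node.branches:
            break
        node = node.branches[answer]
    return node.prompt


# Example: a caller reporting fraud anonymously.
print(run_intake(tree, ["fraud", "yes"]))
```

Because every caller with the same answers follows the same path, the handling is consistent by construction – which is the point of the ‘unfailing competence’ claim above.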

However, for whistleblowers, being able to discuss sensitive issues without being judged by a human may very well be the most important element of the discourse.
