This week, we’re taking a look at what it means to define “Voice-first,” a term commonly used to describe various parts of the voice ecosystem. To do this, we asked active members of the community how they define and introduce the term Voice-first to those new to the technology. Whether you’re new to voice or an existing contributor to the community, check out the answers gathered from the community below:
Question: “How do you explain Voice-first to someone unfamiliar with the voice ecosystem?”
“In a nutshell, voice-first means talking to our technology. So the “input” is voice instead of tapping or clicking – or flipping a light switch or turning a knob on a device like your washing machine. The “output” could be audio (which is what most people experience with smart speakers) or a combination of audio plus visual (if you have a smart display) – or, when it comes to interacting with devices, a washing machine turning on to the desired cycle. Voice-first as a means of interacting with technology is exciting for many reasons, most notably for me because it’s so inclusive. It levels the playing field so that anyone who can speak – regardless of technological sophistication, age, health, disabilities, geography, etc. – can interact and control technology equally.”
- Katherine Prescott (Voice Brew)
“Voice-first computing is the next evolution in ambient computing. As we start building applications from the ground up that revolve around voice, we will slowly transform the way humans interact with machines. We can begin creating more efficient ways to get things done, and unlock completely new use cases that were never possible before.”
- Gal Shenar (Stoked Skills)
“Voice-First is an approach to product design and development that assumes the primary interface between users and devices is voice. Similar to the “mobile-first” design philosophy, which optimized websites for mobile and then adjusted for desktops, “voice-first” emphasizes building apps that are primarily controlled by voice and then may support screens and buttons.”
- Brett Adler (VoiceFirstSlack)
“Hands are used to type on a computer keyboard and you see the email in the form of text on a screen. A phone call is you talking into a computer that your friend hears on the other end. Voice-first interactions are where you speak to the computer and it also responds with human speech. Both content in and content out are through sound. You do not need your hands. You can even close your eyes. You simply need to speak and listen. Voice-first denotes that the overall interaction you have with your computer prioritizes voice as the primary form of communication with the computer. This can be built into smart speakers, cars, wearables, and all kinds of IoT.”
- Joan Palmiter Bajorek (Women in Voice)
“It is the next stage of human-computer interaction: using just your voice to interact with computers and devices rather than typing, touching, or tapping. It means letting go of physical contact with the device and getting things done the most natural way, by saying what you desire.”
- Ashish (Alexa Developer)
“Voice-first means that the system mainly uses voice inputs and interactions to generate responses. Voice-first interfaces are meant to simplify human to technology interactions with exceptionally improved experiences. For example, during cooking, I ask my Alexa to list a few recipes without having to stop, which saves me time and gives a better overall cooking experience. The alternative would have been to pause the cooking and check my phone for recipes.”
- Joseph Truong (VoiceHero)
We would love to hear what you think. How would you introduce the concept of Voice-first tools and devices to someone new to the voice ecosystem? Is there a common theme you see in the answers above? Reach out to us with your thoughts or let us know on Twitter, @JargonJourney.