Switching to a 3-button voice setup: how I control my Mac with Handy and VoiceSnippets

A while back, I wrote a post called Stop typing, start talking: How voice dictation changed my workflow, where I shared how I was shifting my workflow to use my voice more and more. At the time, I was still relying heavily on my main keyboard to trigger these voice commands. It was an improvement, certainly, but it wasn’t perfect. I still had to wrangle my fingers into odd positions to hit complex shortcut combinations.

As a developer, I’m always curious about optimizing my physical workspace. I wanted a way to trigger my voice tools without thinking, and without looking down. So, I decided to buy a 3-button keyboard (actually 4, but I only use three).

In this post, I will share how I use a tiny, three-button keyboard to control my entire machine with my voice.

The hardware: three buttons to rule them all

Nowadays, I use a programmable keyboard with just three buttons (well, technically four, but three is all I need). It is the SinLoon Mini Programmable Mechanical Keyboard.

[Image: My 3-button SinLoon macro keyboard setup]

The goal is simple: instead of hitting Ctrl+Alt+Shift+Something, I just press a single chunky button.

I must admit, the setup for the hardware has one slight catch. The SinLoon keyboard uses a custom app that you have to download via a Google Drive link. That sounds a bit shady, but it is totally fine. The real downside is that the configuration app only works on Windows, while I do all my development work on macOS.

To map the keys, I plugged the keyboard into a Windows machine, opened the app, and set the shortcuts by clicking the virtual keys in the UI window. Once you send the keybindings to the macro board, they are saved directly on the device. When I plug it back into my Mac, it works flawlessly.

The software: translating buttons to actions

I configured the three buttons to trigger the specific tools that power my voice-first workflow:

  1. Button 1: Triggers Handy, my go-to tool for transcribing voice to text anywhere. It sends Ctrl+Alt+Shift+R.
  2. Button 2: Triggers VoiceSnippets, a custom app I built for running commands and text expansion based on spoken trigger words. It sends Ctrl+Alt+Shift+S.
  3. Button 3: Acts as my Enter key.

[Image: VoiceSnippets app to control your computer with trigger words]

VoiceSnippets in action

The real magic happens when you pair a single button press with automation. VoiceSnippets listens for specific trigger phrases and executes pre-defined actions.

Here is how that works in practice. When I want to spin up a project in Visual Studio Code, I press Button 2 and say “Start development”.

VoiceSnippets interprets that and runs the following configuration:

{
  "id": "492188f5-4ddd-4f12-ba02-37b1a72eaa36",
  "trigger_word": "start development",
  "expansion": "<Cmd+Shift+P> → Focus terminal view → <Enter> → <delay 1000ms> → npm run dev → <Enter>",
  "command_type": "Workflow",
  "category": null,
  "aliases": [],
  "workflow_steps": [
    { "step_type": "shortcut", "value": "Cmd+Shift+P" },
    { "step_type": "text", "value": "Focus terminal view" },
    { "step_type": "key", "value": "Enter" },
    { "step_type": "delay", "value": "1000" },
    { "step_type": "text", "value": "npm run dev" },
    { "step_type": "key", "value": "Enter" }
  ],
  "app_filters": [
    { "id": "com.microsoft.VSCode", "name": "Code" }
  ]
}
[Image: VoiceSnippet: Start development command configuration]

This sequence opens the VS Code command palette, focuses the terminal view, waits a second to ensure the terminal is ready, and then types and runs npm run dev.
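
The workflow_steps format maps naturally to a tiny interpreter: walk the list and dispatch on step_type. Here is a rough sketch of that idea (my own illustration, not the actual VoiceSnippets source; the send_* callbacks are hypothetical stand-ins for whatever keystroke-injection library you would use):

```python
import time

def run_workflow(steps, send_shortcut, send_text, send_key):
    """Execute VoiceSnippets-style workflow steps in order.

    The three callbacks perform the actual keystroke injection;
    they are placeholders, not a real API.
    """
    for step in steps:
        kind, value = step["step_type"], step["value"]
        if kind == "shortcut":
            send_shortcut(value)           # e.g. "Cmd+Shift+P"
        elif kind == "text":
            send_text(value)               # type literal text
        elif kind == "key":
            send_key(value)                # single key, e.g. "Enter"
        elif kind == "delay":
            time.sleep(int(value) / 1000)  # value is in milliseconds
        else:
            raise ValueError(f"unknown step type: {kind}")

# Dry run: record the actions instead of sending real keystrokes.
log = []
run_workflow(
    [
        {"step_type": "shortcut", "value": "Cmd+Shift+P"},
        {"step_type": "text", "value": "Focus terminal view"},
        {"step_type": "key", "value": "Enter"},
        {"step_type": "delay", "value": "1000"},
        {"step_type": "text", "value": "npm run dev"},
        {"step_type": "key", "value": "Enter"},
    ],
    send_shortcut=lambda v: log.append(("shortcut", v)),
    send_text=lambda v: log.append(("text", v)),
    send_key=lambda v: log.append(("key", v)),
)
print(log)
```

The delay step is the important detail: without that one-second pause, the npm run dev text would arrive before the terminal has focus.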

I also use text expansion with variables. For example, checking out a git branch:

{
  "id": "474bf19d-938f-4893-bfe2-390990bd7c17",
  "trigger_word": "branch {name}",
  "expansion": "git checkout {name}",
  "command_type": "TextExpansion",
  "category": null,
  "aliases": [],
  "workflow_steps": null,
  "app_filters": []
}
[Image: VoiceSnippet: branch command]

If I say “branch dev”, VoiceSnippets automatically expands it to git checkout dev.
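
Under the hood, a trigger like branch {name} is essentially a template: capture the spoken variable, then substitute it into the expansion. A minimal sketch of that matching logic (my own illustration, assuming trigger words contain no regex metacharacters — not the app's actual code):

```python
import re

def match_trigger(trigger_word, spoken):
    """Match a spoken phrase against a template like 'branch {name}'.

    Returns the captured variables as a dict, or None on no match.
    """
    # Turn each {var} placeholder into a named capture group.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>.+)", trigger_word)
    m = re.fullmatch(pattern, spoken, re.IGNORECASE)
    return m.groupdict() if m else None

def expand(template, variables):
    """Substitute captured variables into the expansion string."""
    return template.format(**variables)

variables = match_trigger("branch {name}", "branch dev")
print(expand("git checkout {name}", variables))  # → git checkout dev
```

Because the placeholder becomes a named group, the same mechanism scales to triggers with several variables without any extra bookkeeping.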

A real-world workflow

When I am answering questions or being interviewed by an AI tool like Ghostwriter, the seamless nature of this setup really shines. Here is the exact loop:

  1. I hold down Button 1 (Handy).
  2. I speak my answer out loud.
  3. I release Button 1.
  4. I tap Button 3 (Enter).

Handy transcribes the text directly into the prompt box, and the Enter button submits it. I can do this process over and over while leaning back in my chair. I do not have to touch my mouse or look down at a full keyboard. I just press the button and talk.
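
Conceptually, that hold-to-talk button is a tiny state machine: key-down starts recording, key-up stops and hands the audio to the transcriber. A toy model of the loop (the transcribe callback is a stand-in for Handy's real engine, which I have no source access to):

```python
class PushToTalk:
    """Minimal push-to-talk model: buffer audio while held, transcribe on release."""

    def __init__(self, transcribe):
        self.transcribe = transcribe  # audio chunks -> text (placeholder)
        self.recording = False
        self.buffer = []

    def key_down(self):
        """Button pressed: start a fresh recording."""
        self.recording = True
        self.buffer = []

    def feed(self, audio_chunk):
        """Audio arrives continuously; keep it only while the button is held."""
        if self.recording:
            self.buffer.append(audio_chunk)

    def key_up(self):
        """Button released: stop recording and return the transcription."""
        self.recording = False
        return self.transcribe(self.buffer)

# Simulated session: the "transcription" just joins the chunks.
ptt = PushToTalk(transcribe=lambda chunks: " ".join(chunks))
ptt.key_down()
ptt.feed("hello")
ptt.feed("world")
print(ptt.key_up())  # → hello world
```

The point of the model: because press and release bracket the recording, there is no separate start/stop toggle to get out of sync — the button state is the recorder state.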

The takeaway

Dedicated physical inputs for voice triggers are remarkably faster than contorting your fingers into shortcut combinations. Once muscle memory sets in, you never have to look down—you just know the button is exactly where it needs to be, and it will work every single time.

If you spend a lot of time writing, prompting, or executing repetitive terminal commands, I highly recommend exploring a voice-driven approach. Start using your voice today!

Let me know what you think.


Elio Struyf

Solutions Architect & Developer Expert
