Driving down a shady road, windows down, listening to the frogs and crickets, my family and I were talking about various stuff and things. This summer evening we happened to talk about the invention and emergence of the word “yeet.” I observed that it was kind of cool to have a word with a known origin and etymology, even if that was only because it was a made-up word. My daughter instantly responded that “all words were made up by someone.”
What could I say? Of course it’s true!
I’ve previously talked about the difficulty that words present. In 2015 I discussed the perils of semantic coupling that can emerge when we get fooled by nouns. The existence of a noun makes us think we understand a concept. But once we try to define a predicate to answer “Is X an instance of Y?” for any noun Y, it becomes difficult, verging on impossible, to find a categorical statement. Instead we fall back to the Potter Stewart method: we know it when we see it.
In Wittgenstein and Design (say that three times fast) I talked about pursuing adjectives instead of nouns as a way to carve a design space.
Today, I want to talk about how we use words as signifiers for their semiotic content. In particular, the words “manual” and “automated.”
Two legs good, four legs bad
We are now ten years into the DevOps era. Among both practitioners and adopters, there is a tendency to use “automated” as a pseudo-synonym (psynonym?) for “good” while “manual” stands in for “bad.” The trouble is that the closer you look, the harder it gets to tell whether any particular thing is manual or automated!
Suppose we are in an incident. I invoke the “break glass” process to ssh into a server to run a bash script. Was that manual or automated? Well, both, sort of.
- We are in an incident… probably initiated without human intervention based on monitoring systems that detected a triggering condition.
- I invoke the break glass process… wait a second. How did I even get involved? Maybe the systems notified me directly via PagerDuty. That would involve no human intervention. Or maybe our operations center decided to escalate to level 3 support, and I’m the on-call this week. In the second case, a human decided the escalation was required and clicked a button in ServiceNow. ServiceNow then used a database to contact me. Was that manual? Automated? Semi-automated?
- I invoke the break glass process… wait another second. Once I’m involved, I have to bring information into my head. That information came from humans and systems. I then have to decide on a course of action. I guess we’d call that manual? (Although “manual” derives from the Latin “manus,” meaning hand: hand powered, not brain powered.) Invoking the break glass process is an action in a system that I trigger by entering a rationale and clicking a button.
- to ssh into a server… entirely facilitated by the systems.
- to run a bash script… does a bash script count as automated? Or is it manual because I had to invoke the script? What if there’s no script, but a wiki page with a list of commands I keystroke each time? That sounds more manual, but I’m still invoking tools that already exist. At some level, everything above toggling in a program is automated.
Out of the Morass
Instead of applying a blanket label like “manual” or “automated,” we should look more closely. Specifically: what actions are being executed, by which people or systems, via which tools, in response to which stimuli?
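One way to make that breakdown concrete is as a small record type. The field and value names below are hypothetical, just to show the shape of the question rather than any real schema:

```python
from dataclasses import dataclass


@dataclass
class ProcessStep:
    """One step of a process, described along the four axes above."""
    stimulus: str  # what triggered the action
    actor: str     # the person or system that executed it
    tool: str      # the tool the actor worked through
    action: str    # what was actually done


# A hypothetical step from the break-glass scenario:
step = ProcessStep(
    stimulus="alert from monitoring",
    actor="on-call engineer",
    tool="break-glass ssh session",
    action="run remediation script",
)
```

Notice that “manual vs. automated” doesn’t even appear as a field; once the actor, tool, and stimulus are named, the label adds nothing.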
When we engage with detail at that level, we can begin to ask and answer more useful questions than “is it automated?” For example:
- How long does it take from the stimulus to the action? Bear in mind that shorter is not always better.
- What is the probability of error in performing the action? Toggling in that 1401 program… pretty high probability of error. Running a bash script… low probability of error. (But that probability rises geometrically with each argument to the script!)
- What judgement or decision-making is required to choose an action in response to a stimulus? As we build ever-more-powerful levers to move our systems, and particularly as we give our systems their own internal feedback loops through the control plane, we need to think of them more like cybernetic systems. (Think about PID controllers, Kalman filters, inertial models, or creating a radar track from a series of intermittent “blips.”)
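The compounding risk in the second question can be sketched with a simple independence model. The 2% per-argument mistake rate below is purely an illustrative assumption:

```python
# If each argument to a script is an independent chance to make a
# mistake, the chance of at least one mistake compounds with the
# argument count (the complement of "every argument correct").
def p_at_least_one_error(p_per_arg: float, n_args: int) -> float:
    """Probability of at least one mistake across n_args arguments."""
    return 1 - (1 - p_per_arg) ** n_args


for n in (1, 3, 5):
    print(n, round(p_at_least_one_error(0.02, n), 4))
```

Even at a modest per-argument rate, the overall chance of error grows noticeably with each argument you have to type.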
Breaking the question down this way won’t help us answer whether something is “automated” or “manual.” But it will help us answer how likely the process is to deliver availability, stability, and security; or, conversely, how likely it is to amplify noise, create oscillation, or induce drag.
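To make the cybernetic framing concrete, here is a toy PID (proportional-integral-derivative) controller driving a trivial plant toward a setpoint. The gains, the plant model, and the step count are illustrative assumptions, not a recipe for any real system:

```python
class PID:
    """Minimal discrete PID controller sketch."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        """Return a control output for one time step."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive a toy first-order "plant" toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1)
state = 0.0
for _ in range(200):
    control = pid.update(setpoint=1.0, measured=state, dt=0.1)
    state += control * 0.1  # plant: state moves in proportion to control
```

The point of the sketch is the feedback loop itself: the controller reacts to a stimulus (the error), and tuning it badly amplifies noise or creates oscillation, exactly the failure modes a “manual vs. automated” label says nothing about.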