I’ve been reading this week about how humans learn and effective ways of transferring knowledge. I’ve also had AI in the back of my mind, and recently I’ve come to the realization that not only is our industry building AI tools poorly, we’re building them backwards.
First: How we learn. My favorite (evidence backed) theory on how humans learn is Retrieval Practice.
The short of it is that humans don’t really learn when we download info into our brain, we learn when we expend effort to pull that info out. This has some big implications for designing collaborative tooling!
This is one of the reasons why people who lean on AI as a crutch for writing, summarizing, and similar work aren’t learning. Myself included. It’s also why I’m working on writing and synthesizing more: I can feel the difference in practice, and it’s very helpful to have a tldr;1 for why that is.
Second: What we learn. It turns out, the “thing” that we learn most effectively is not knowledge as we typically think of it, it’s process.
What's fun here: the process is the point! Been a saying of mine for a few years now. Hah.
We’re really good at cumulative iteration. Humans are turbo-optimized for communities, basically. This is why brainstorming is so effective… but usually only in a group. There’s an entire theory in cognitive psychology, cumulative culture, that digs directly into this and shows empirically how humans work in groups. Humans learn collectively and innovate collectively via copying, mimicry, and iteration on top of prior art.
So, combine all of those bits of information together, what do we get?
- Humans learn and teach via process
- Processes need to take a goldilocks amount of effort to be effective
- Cumulative iteration > solo developer problem solving
- We build tools to help us think, not to think for us
So, I’m going to walk through one of the anti-patterns I see in AI tooling and fix it by taking an evidence-based teaching process and imagining it augmented with AI. The teaching process, by the way, is: Explain, Demonstrate, Guide, Enhance.
Making humans chew through a zillion tokens in order to get a simple task done is a great way to take your friction-reducing interaction and turn it into a friction-introducing interaction.
As a bonus: it helps observers learn via osmosis, even if they’re not actively involved in taking actions. Also, did you know there’s real support for the idea that humans learn at the sub-action level just by observing? It’s not necessarily the primary mechanism, but it contributes to the propagation of that knowledge and helps spread “how we approach doing” throughout teams very well. Humans are so neat, seriously.
...there’s a general set of principles here:
- Reinforce human learning
- Help humans work better together
- Accelerate human execution in-process, don’t remove it
- Never go from blank to outcome
- Tools should take the right amount of effort to use
- Incorporate team learning into the tool’s output
Systems tooling is ripe for revolutionary changes in how they’re imagined, how they’re implemented, and how they’re valued. But those changes will never materialize if we don’t build them to be human-first. Don’t just keep humans in the loop, remember that humans are the loop.
- tldr; is short for Too Long, Didn’t Read. It’s tech-speak for what could also be called a summary or even a nut graf. ↩