Digital Senses Theory
First of all, what is a digital sense?
Digital data is discrete: each bit is a sample; play enough samples fast enough and you create the illusion of continuity. We can watch an 8K video stream of low tide and accept it as real because of that illusion, but it is not. It is a fantastic orchestration of light and bit logic.
Our own sensory data is continuous: where does a smell end? We stream it constantly, with almost no effort.
To illustrate this concept, here are two graphs.
The first graph depicts a continuous signal. The second graph depicts the same signal sampled about 9 times.
The discrete signal contains far less information, but it still gives us a good idea of the original without sampling ad infinitum, which is what the first graph would require to perfectly recreate it in digital form.
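The idea behind the two graphs can be sketched in a few lines. This is a toy illustration, not a reconstruction of the actual graphs: it assumes the continuous signal is a simple sine wave (which we can only ever evaluate pointwise) and takes roughly nine samples of it, as the second graph does.

```python
import math

# Stand-in for the continuous signal; a computer can only ever
# evaluate it at discrete points, never hold the whole curve.
def signal(t):
    return math.sin(2 * math.pi * t)

# Sample it 9 times over one period, like the second graph.
num_samples = 9
samples = [signal(i / (num_samples - 1)) for i in range(num_samples)]

print(samples)
```

Nine numbers are obviously far less information than the full curve, yet they are enough to suggest its shape; recovering the curve exactly from samples alone would require sampling without limit.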
The only way to replicate the continuous signal, or copy it, is to use an analog method, where the receiving medium takes on the physical properties of the signal, like a vinyl record. This method has no sample rate limit, but its operations and dynamic range are limited by the medium.
You can’t perform operations on analog data without modifying the medium. We can change the way a record sounds by slowing the RPMs or scratching it like a DJ, but we can’t run Auto-Tune on it without converting between analog and digital.
My theory is this: AI could be the DAC for our senses.
Our digital senses evolution is limited by state-of-the-art discrete Boolean signals:
- “Like” or don’t
- “Retweet” or don’t
- “Comment” or don’t
It’s the lack of rich sensory data that’s making the internet a terrible place to be.
As it stands, we don’t ask much of our senses and instead frontload a ton of discrete signals to the prefrontal cortex, resulting in fatigue, stress, anxiety, and a framework of the world constructed from shallow data sets, like Likes. Since we don’t have all the information, we desperately extrapolate from bits of data and cling to misinformed conclusions.
Rich sensory data doesn’t have to mean more data. As long as the signals are discrete, larger quantities of data won’t produce a sensory experience—aside from predicting the next time you will sneeze, more data can only increase the confidence of predictive models, for better and worse. Predictive models can match an ad with a user, but can’t help us feel the presence of someone else digitally.
Imagine stepping outside
As you step outside, instead of 37th Street you just see Minecraft. That is how our senses feel right now.
We see each other, but we don’t feel each other. When I say feel, I mean presence: full color, fabrics, soundscapes, the smell of fresh-cut grass and two-stroke engine exhaust. Our digital sensory sample rate is very low, which translates to an experience akin to listening to a crappy LimeWire download from 2000.
Continuous signal processing
Can you tell if someone is lying over Zoom? I don’t know. But it’s much easier to tell if someone is lying when they are standing directly in front of you.
Expanding on this example using the discrete and continuous frameworks:
Discrete Lying Signals
- Too much eye contact
- Not enough eye contact
- Nonfactual information
Continuous Lying Signals (as inferred thoughts)
- They don’t usually act like that.
- They were acting strange.
- It felt wrong.
Perhaps you can see where the discrete signals would fail: they don’t take into account an individual’s wiggliness.
We can say:
- Hey, that wiggly line sort of looks like this wiggly line…
But to say:
- That wiggly line is just like this wiggly line, so they are a match.
- I predict they will behave the same way and express the same properties.
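The gap between those two statements can be made concrete. In this toy sketch (the signals, the similarity measure, and the 0.95 threshold are all my own hypothetical choices, not anything from the text), “sort of looks like” is a continuous similarity score between two sampled wiggly lines, while “they are a match” is a hard threshold applied to that score, which is where the decision, and the risk, enters.

```python
import math

def similarity(a, b):
    # Cosine similarity: a continuous score in [-1, 1].
    # "Sort of looks like" is a degree, not a yes/no answer.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

line_a = [math.sin(t / 3) for t in range(20)]
line_b = [math.sin(t / 3 + 0.2) for t in range(20)]  # a slightly shifted wiggle

score = similarity(line_a, line_b)  # continuous: how alike are they?
is_match = score > 0.95             # discrete: a deployed decision
print(score, is_match)
```

The score carries the nuance; the threshold throws it away. Everything downstream of `is_match` acts as if the two lines were identical, which is exactly the kind of operation that can damage the medium.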
The latter is a decision, which when deployed can do some damage to the medium (us); but it can also predict which brand of toothbrush you prefer and when you might need one ($$$).