Dear Reader,
A joke I have told in interviews for data analyst jobs goes like this: “If you do statistics and you understand both the methodology and the results, you call it Statistics. That’s good, you’re very smart. If you do statistics and you don’t understand the methodology but you do understand the results, you call it Machine Learning. That’s good, you’re very smart. If you do statistics and you understand neither the methodology nor the results, you call it AI. That’s good, you’re very smart.” I think I plagiarized that joke from someone else, but I don’t remember who.
To understand what AI is, you have to understand what language is. To understand language, you have to understand what math is. To understand what math is, uhh, good luck. These may help:
Davis, P. J., & Hersh, R. (1998). The Mathematical Experience. Boston: Houghton Mifflin.
Hofstadter, D. R. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.
I think about it like this: AI is the culmination of a long process by which humanity has been figuring out ways to do language faster. Language is essentially a collection of symbols, plus rules about which combinations of symbols are allowed to lead to which new combinations of symbols. Once we decide that some symbols are true (that they map to mutual experience), these systems of axiomatic inference let us conclude that other symbols are true or not true, even though we don’t have direct access to the mutual experience behind those other symbols. For example: if you tell me that you ate an apple in Paris, France, I believe you easily, because both of those symbols map to things I don’t have a hard time believing in. I’ve eaten apples, I’ve been to Paris, everything checks out. But if you tell me you ate an apple on a UFO above the mythical city of Atlantis, I am gonna flag that as potentially not true, because that series of symbols maps to a concept I suspect isn’t in the domain of possible mutual experience. If you adjust your statement to say you had a dream in which you did an impossible thing, I can accept that easily, as silly dreams once again map to mutual experience.
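Hofstadter's book above actually opens with a miniature system of exactly this kind: the MIU puzzle, one axiom and four rewrite rules. The rules are his; the little Python sketch below is mine, just to show how mechanical "allowed combinations of symbols" can be:

```python
# Hofstadter's MIU system: one axiom ("MI") and four rewrite rules.
# The rules are his; this breadth-first grinder is my own sketch.

def successors(s):
    """Every string the four MIU rules let you reach from s in one move."""
    out = set()
    if s.endswith("I"):                        # Rule I:   xI    -> xIU
        out.add(s + "U")
    if s.startswith("M"):                      # Rule II:  Mx    -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):                # Rule III: xIIIy -> xUy
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):                # Rule IV:  xUUy  -> xy
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

# Everything reachable from the axiom within four moves is a "theorem":
# a string the system licenses, whether or not any human finds it meaningful.
theorems, frontier = {"MI"}, {"MI"}
for _ in range(4):
    frontier = set().union(*map(successors, frontier)) - theorems
    theorems |= frontier
print(sorted(theorems, key=len))
```

Nothing in that loop knows or cares what the strings mean; it just applies the allowed moves, which is the whole point.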
AI is that sort of mental work happening outside our heads. It’s a statistical mechanism that uses electricity and logic gates to learn the kinds of symbol combinations we tend to accept as valid inferences, and to apply those same rules far faster than we ever could.
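If you want to see the statistical half in miniature, here is a toy sketch of my own (with a made-up training sentence, nothing like a real system's scale): tally which symbol tends to follow which, then emit whatever the tallies favor.

```python
import random
from collections import Counter, defaultdict

# Tally which symbol follows which in some text, then emit whatever the
# tallies favor. The training sentence is a made-up stand-in, not a corpus.
text = "i ate an apple in paris . i ate an apple in a dream .".split()

follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):               # count every adjacent pair
    follows[a][b] += 1

def babble(word, n=8):
    """Emit up to n symbols, each drawn in proportion to observed counts."""
    out = [word]
    for _ in range(n):
        nxt = follows[out[-1]]
        if not nxt:                            # no observed successor: stop
            break
        out.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return " ".join(out)

print(babble("i"))   # e.g. "i ate an apple in a dream ."
```

A real model does this over billions of symbols with far richer machinery than adjacent-pair counts, but the move is the same: learn which combinations we produce, then produce more of them, fast.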
I don’t think AI is new, by this definition. All computers, calculators, slide rules, actuarial tables, written documents, and notches on trees have helped us outsource our cognitive processes to things outside our own bodies. No single human can read all the law books, or the code of every website, or every theological rant on every clay tablet and scroll. We have been grappling for a very long time with the disorientation of systems of thought living partly outside our heads. The difference now is speed. We are using lightning instead of ink. We are actively trying to reach a point where the main thinking happens in the systems outside our heads, and we are merely auxiliaries to that process.
Is that good? Yes.
Is that bad? Yes.
Is that awesome? Very.
Does anyone have the capacity to really understand the implications? That point passed long ago, probably thousands of years back, so, umm, clearly no.
Can we do something about it? We have never fully understood the sea, and yet we sail.
with love,
Dakota Z Schuck
© 2025 Dakota Schuck