In the twilight of a server room, among rows of matte casings, thousands of tiny lights flicker: neurons in a vast digital brain. Deep within circuits and wires, data streams flow silently, forming patterns from billions of computations per second. Threads of algorithms intertwine, selecting and delivering to you the most probable sequence of symbols: your large language model at work.
First Contact
Artificial intelligence emerged in society around the same time I did. Yet it wasn't until I found myself on the old benches of the Mechanics & Mathematics faculty tower, in a Data Analysis course, that I began to think about AI consciously. We studied neural network architectures, gradient descent, and interpretability concepts; none of it was called "AI" then. At best, it went by "machine learning."
Another pillar of my interest was art—Dune, Blade Runner, 2001: A Space Odyssey, Mass Effect, and even Resident Evil. Humanity has always reflected on artificial intelligence. Right now, as I type these lines, I reflect on how my startup Telemetree is building an AI agent and how my social hub Unitaware could better utilize AI tools.
If you've read this far, welcome to a substantial thread on a systemic perspective on AI. In this post I'll introduce the key lenses: task-solving automation, self-learning ("automation of automation"), and imitation ("automation of representation"). They make it possible to discuss the many facets of what we call AI meaningfully; after all, "AI" is more of a meme than a scientific term, and it's far more productive to view it from several angles.
Non-artificial Intelligence
TAME (Technological Approach to Mind Everywhere) is a framework coined by the biologist Michael Levin, available in both a more complex and a simpler version.
We'll start thinking about artificial intelligence by first considering intelligence in general. According to TAME, intelligence can exist not only in humans (an idea easily accessible to us) but also in entities without a brain (a less accessible idea).
Among the related concepts—intelligence, consciousness, mind, intellect—I see "intelligence" as the simplest. I propose a utilitarian and functional definition borrowed from a neuroscientist: "the ability to solve problems."
Thus, viewing artificial intelligence through the lens of TAME and acknowledging the diversity of natural intelligence, AI becomes any artificially created system capable of solving problems. In everyday usage, however, "AI" mostly refers to any system based on machine learning that can solve tasks.
Practically speaking, it's also important to differentiate between "narrow" and "broad or general" intelligence. The former includes, for instance, facial recognition algorithms (which can do nothing besides recognize faces); the latter aims at multimodal systems capable of generating video, recognizing audio, writing text, and acting in online environments. These are two poles, introduced for convenient comparison with humans.
The final thought needed before we discuss automation is the comparison of intelligence with consciousness. Let's try to define consciousness as simply and clearly as intelligence was defined above. I can't manage it: everything I think about consciousness inevitably becomes more complex.
Some primarily link consciousness to perception (remember the "automation of representation" hook at the very beginning?).
Some dismiss consciousness itself as a vestigial concept.
Others consider consciousness fundamental, preceding matter.
But what's important for us is that consciousness is NOT intelligence—it isn't primarily about solving tasks, although they might be inseparably connected somehow.
My preliminary conclusion: understanding consciousness isn't necessary for a functional discussion of AI. An adjacent conclusion, though: anyone who doesn't grasp this difference at all has no business pronouncing on AI ethics (including its application). Unfortunately, that is not the case now and won't be, but a boy can dream.
AI Automation
"Any system based on machine learning capable of solving tasks" implies that some machine learning has taken place and produced an algorithm. The more that algorithm can contribute to its own improvement, the more we can speak of self-learning. By default one imagines a scenario where a model autonomously decides to rewrite its own code; such a scenario exists, and it leads toward the singularity. Initially, however, it's enough to consider indirect self-improvement: any setup with at least two layers of automation. Yes, it sounds confusing, which is exactly why it's worth dwelling on here. Once you grasp the intuition behind layered automation, you'll see the essence of AI everywhere. Let's look at examples.
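To make "two layers of automation" concrete, here is a minimal toy sketch of my own (not an example from any of the systems discussed here): layer one is a parametric rule that solves a task, and layer two is a loop that automatically rewrites layer one's parameters from examples. Layer two is the automation of building the automation.

```python
# Toy sketch of two layers of automation (illustrative only).
# Layer 1: a parametric rule that solves a task (predicting y from x).
# Layer 2: a loop that improves Layer 1 from examples, i.e. automation
#          of the act of constructing the automation itself.

def predict(w, b, x):
    """Layer 1: the task-solving automation."""
    return w * x + b

def train(examples, steps=2000, lr=0.01):
    """Layer 2: builds Layer 1's parameters via gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in examples:
            err = predict(w, b, x) - y   # how wrong Layer 1 currently is
            w -= lr * err * x            # nudge parameters downhill
            b -= lr * err
    return w, b

# The training data implicitly encodes the rule y = 2x + 1;
# nobody types that rule into Layer 1 directly.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(round(w, 3), round(b, 3))  # close to 2 and 1
```

The point of the sketch: once layer two exists, improving the task-solver no longer requires a human to edit it by hand, which is exactly where the "self-learning" framing begins.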
In that Data Analysis course we discussed AlphaGo, an algorithm for playing Go. What impressed me most was that nobody handed it winning strategies: given little more than the rules, it trained on games it had played against itself, and it still defeated the top human players.
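The self-play idea can be shown in miniature. The sketch below is my own toy example, far simpler than anything DeepMind built: a tabular agent learns state values for tic-tac-toe purely from games against itself, with only the rules of the game encoded; all parameter values here are illustrative assumptions.

```python
import random

# Toy self-play learner for tic-tac-toe (illustrative sketch only).
# Only the rules are encoded; strategy emerges from self-play plus
# every-visit Monte-Carlo updates to a shared value table.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def choose(board, player, values, eps):
    """Epsilon-greedy choice over afterstates (boards after the move)."""
    if random.random() < eps:
        return random.choice(legal_moves(board))
    def after_value(m):
        after = board[:m] + player + board[m + 1:]
        return values.get((after, player), 0.5)
    return max(legal_moves(board), key=after_value)

def self_play(games=5000, eps=0.1, lr=0.3):
    values = {}  # (board_string, player) -> estimated win chance
    for _ in range(games):
        board, player = "." * 9, "X"
        seen = {"X": [], "O": []}
        while winner(board) is None and "." in board:
            m = choose(board, player, values, eps)
            board = board[:m] + player + board[m + 1:]
            seen[player].append(board)
            player = "O" if player == "X" else "X"
        w = winner(board)
        for p in "XO":
            reward = 0.5 if w is None else (1.0 if w == p else 0.0)
            for s in seen[p]:  # pull each visited state toward the outcome
                v = values.get((s, p), 0.5)
                values[(s, p)] = v + lr * (reward - v)
    return values
```

Evaluating afterstates (the board as it would look after a candidate move) rather than state-action pairs keeps the table small; it is the same trick used in classic self-play demonstrations for board games.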
Now, what does another Google DeepMind product, AlphaFold, automate? It predicts a protein's three-dimensional structure from its amino-acid sequence. To appreciate the basic point you don't need biological knowledge; you only need to see that the algorithm delivers the desired result, solving the desired task. Neither I nor 99% of readers come anywhere close to solving the tasks AlphaFold automates, and the highlight is that the algorithm solves them in ways humans couldn't reach through mere brute force. This is crucial to understand, realize, and feel: here lie seeds of the singularity, even if it won't sprout from this soil alone. AlphaFold's tasks belong to narrow (though rare among humans) intelligence, solved quickly and nontrivially.
Although humans and AIs solve tasks, only humans set these tasks—that is our principal advantage, which, I believe, exists because of consciousness. Humanity will constantly strive to imitate consciousness—a topic we'll explore further in more psychologically oriented posts of this thread.
AI won't replace us, but it will serve as a tool for creation and destruction, accelerated by automation of automation. And all this is happening at a time when we, as a species, haven't even fully reflected on the previous groundbreaking technology—the internet! We’ll discuss this further in sections dedicated to economics, international relations, and risks.
➡️ This was an introductory post, and I've already received valuable feedback on its theses. I welcome questions, objections, and additions. Next, we'll discuss types and consequences of AI automation.