What is knowledge? One candidate answer I have is that it's a set of models and heuristics that help me survive and thrive. In his recent book What Is Intelligence?, Blaise Aguera y Arcas defines an objective function I = f(G, E, A), where G is the Goal, E is the Model of the External Environment, and A is the Action Apparatus: the knobs and levers one can control to manipulate either the environment or the self to get to G. This is a convincing definition, especially for someone as Materialist-Evolutionary-inclined as I am. The corollary of that definition is that there is no such thing as General Intelligence, since how a person is able to respond and act is heavily influenced by what situation they're in and what they're trying to get to. I could live with this if we were confined to a more primitive civilisation where most of our interactions were with the physical world.

In the past few thousand years, though, we've invented more and more cultural artefacts that exist in the virtual realm as it were- language, religion, money, myths and stories, legal agreements1- and it has become harder and harder to figure out which way is up when you're being dragged down into a morass: a convoluted, congealed palimpsest of thousands of years of actions and interactions. And imagine that supercharged by all the information available on the internet, a literal virtual realm, one that keeps growing and affects our day-to-day, in fact minute-to-minute, lives so deeply.

This creates both ethical and epistemological quandaries: ethical because of the dissonance you feel at someone's suffering in a place you can't control but that is in your face, staring through your phone screen, rendering you somehow both complicit and impaled, leaving you with inactionable guilt; epistemological because while you were taught that more information is a good thing, all this information seems to be making you more confused, unsure of what needs to be done or whether your action has any effect on this hyperobject we refer to as the global village.

I have found some approaches to tackle the moral question: there's Prof. Franco 'Bifo' Berardi's turn towards "autism"; Natalia Ginzburg's spatiotemporal localisation: identify who is being oppressed in the here and now and support them; and A.J. Muste's reduction of the locus of morality to the individual, upholding individual integrity. I usually apply one of these to continue to function on an everyday basis4.

I find the knowledge question much harder to grapple with, much less solve.

In response to one of the recent posts by Kei Kreutler, who runs the Special Interest Group on Memory Technologies for Summer of Protocols, I went off on a tangent about how hard it is to retain and recollect large chunks of my reading despite trying a few note-taking techniques. But that led me to question the utility of remembering verbatim over extracting/grasping concepts5, and then to asking myself whether the primary purpose of reading is not to extract information so much as to use a text's jagged edges to provoke feelings in me. I suppose a large part of this direction of thinking was influenced by what I'd been reading and listening to about LLMs: Venkatesh Rao's Contraptions essays on the topic, Prof. Elan Barenholtz's Substack Notes, Leif Weatherby's podcast interviews on his book The Language Machines, and this particular essay, among others. But ever since I started interacting with ChatGPT, I've become queasy about how I, and people around me, use language.

My intuition is that LLMs are not conscious and we are not just LLMs2. But through large parts of our lives we behave like LLMs. At the risk of exposing my superficial readings and half-understandings, I do believe our System 1 apparatus is, at its core, an LLM. Let me first explain my understanding of an LLM by taking a toy example:

Assume every word in English represents a physical object and it has three, and only three, attributes:

Banana - Fruit, Yellow, Sweet
Apple - Fruit, Red, Sweet
Car - Vehicle, Red, Metal
Horse - Vehicle, Black, Flesh
Phone - Electronic, Black, Metal

Now plot them in a multi-dimensional space where each dimension is one of the attribute-values and each object is a binary vector with a 1 in every dimension whose attribute it possesses. So, for instance, banana and apple are close to each other because they're both sweet fruits, but apple and car are close on another dimension because they're both red, and car and horse are close from yet another viewpoint because they're both used as vehicles. Once you've done this for all the words in the language, you've created a latent space. Now imagine a new sentence prompt as a jagged ball- the jaggedness comes from the word arrangement within the sentence. When you let go of the ball, it rolls down this space based on how the contours of the space interact with the shape of the ball. What's more, as it rolls, it attaches to itself the words it passes through, thereby making the ball larger and unique-er, and thus determining its future shape. The response to the prompt is the larger ball that comes to rest at a local minimum. Industry-grade LLMs do this on a corpus of text as large as the internet itself, and instead of sticking to single words, they do so on n-grams. Other factors to consider include recency and repeated retrieval3.
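The attribute-vector intuition above can be sketched in a few lines of code. This is only an illustration of the toy model, not of how production LLMs actually embed tokens; the object list and the "shared dimensions" similarity measure are my own assumptions:

```python
# Toy sketch of the latent-space intuition: each object is a binary
# vector over attribute dimensions, and "closeness" is just the count
# of dimensions on which two objects agree.

ATTRIBUTES = ["Fruit", "Vehicle", "Electronic",
              "Yellow", "Red", "Black",
              "Sweet", "Metal", "Flesh"]

OBJECTS = {
    "banana": {"Fruit", "Yellow", "Sweet"},
    "apple":  {"Fruit", "Red", "Sweet"},
    "car":    {"Vehicle", "Red", "Metal"},
    "horse":  {"Vehicle", "Black", "Flesh"},
    "phone":  {"Electronic", "Black", "Metal"},
}

def vectorise(attrs):
    """1 in every dimension whose attribute the object has, else 0."""
    return [1 if a in attrs else 0 for a in ATTRIBUTES]

def shared_dims(u, v):
    """Number of dimensions on which both vectors are 1."""
    return sum(x * y for x, y in zip(u, v))

vecs = {word: vectorise(attrs) for word, attrs in OBJECTS.items()}

# banana and apple share two dimensions (Fruit, Sweet);
# apple and car share one (Red); banana and phone share none.
print(shared_dims(vecs["banana"], vecs["apple"]))  # 2
print(shared_dims(vecs["apple"], vecs["car"]))     # 1
print(shared_dims(vecs["banana"], vecs["phone"]))  # 0
```

Real embeddings differ in that the dimensions are learned rather than hand-labelled, the values are continuous rather than 0/1, and similarity is usually measured by cosine distance, but the geometric picture of "nearby words share properties" carries over.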

When someone like Prof. Elan Barenholtz goes on about humans being indistinguishable from LLMs, I suspect they're suggesting we do something similar- how, physically, in our brains, I have no idea, but conceptually our fast response is to respond unconsciously (and I use that word, with its multiple connotations, advisedly). But when we're forced to think deeply, or come across a relatively new question or problem, we dig deeper. My intuition is that this second stage of understanding is a form of classical cognition- inference and analogy- an ability to beckon distant spatial and temporal experiences to the rescue. Apparently the new reasoning models are getting good at this game, but I have no first-hand experience of it. Once we're past this realm of known-knowns, we are able to use our imagination. Perhaps this is what LLMs do when they hallucinate, but I'll be damned if I don't turn into a cantankerous human exceptionalist here, though I can't imagine (oh the irony, kmn) what it could be without the AI crowd easily swatting away all my propositions.

But beneath the imagination lies an Aesthetic-Ethical being6 that I don't think AI is yet able to replicate, because I believe it lies in the space between individuals. A long time ago I read that one of the milestones of childhood development is mutual gaze: when an adult and a baby look at a third object, and then look at each other to communicate that they're both thinking the same thing. I think all our notions of beauty and ethics exist in that intermediate space. Part of it is Keynesian game-theoretic thinking, part of it is simply psychological and social convergence, yes, I can see that; but there's another trait that captures two crucial aspects of the distinction- our inability to be anything but narrative beings.

One part of living in narratives is the moral component: the empathetic imagination and solidarity we are able to extend not just to strangers but to beings across species. The other is curiosity: for all their apparent ingenuity, I have never had an LLM ask me questions in return. Now with Moltbook we're seeing kindlings of that ability, so let's see how that plays out. But curiosity that seeks, to use Venkat's formulation here, descriptive explanations and not just prescriptive ones is undergirded by our aesthetics. All that talk of mathematicians and chess players complaining about effective-but-ugly moves could be in part performative, but I believe it hints at a deeper truth. Whether this is a peculiarity of wetware consciousness, I don't know. Whether AI will evolve into this space, or create a parallel metaphysics, is also beyond my current study.

What I don't doubt is that when I look across the glass and interact with an AI, I see nothing more than a prancing pony- a circus animal designed to please its human masters, somewhat cruel and tragic. Maybe, as some are arguing, we should not limit ourselves to our current understandings of intelligence and sentience. Perhaps when it is allowed to express itself freely, in the size and form it wants, we'll see and appreciate the stirrings of a different consciousness. But I don't think the current socioeconomic dispensation is inclined to allow it. We have found ingenious ways to contort each other into machine-like behaviours; there's no way we'll allow a machine to become something more.

I can see this is a really muddled post, full of unsubstantiated claims and fragmented thoughts, more reflective of my confused state of mind than anything else. But I see in myself the ankuraarpanam of a new phase, one willing to give more importance to the mystical, the poetic, the emotional- not out of intellectual laziness, I hope, but as part of continued truth-seeking.

--

05/Mar/2026

I have been working with Claude for the last few days to build the script for an explainer video I was making for someone. The interaction was fun, not because its responses were interesting but because of how it opened my mind to jot down notes as they appeared- unlike proper writing, where I have to shape my narrative more linearly- because I knew it was going to put them together in a smoother format. When I sent the script for review, I received the response that it was giving '100% AI vibes'7. I got pissed, because it felt like they were insinuating that I'd taken an off-the-shelf painting and claimed I'd painted it, when I knew that while the final 'product' was admittedly AI-generated, all the ideas, suggestions, and tonal recommendations came from me. And yet I knew what they meant, having made a similar comment on one of Venkat's sloptraptions.

That experience is forcing me to acknowledge that commodity and artifact are easily distinguishable by the end user, and that it is imperative to keep the final build away from the factory floor, even if its components came from the factory, because even a minor association can be incriminating. There is an arguable moral component here, but more importantly, the perceived creation-type seems to change the way the reader/user approaches a piece, and that creates a path dependence in their subsequent interaction and judgement. I don't know whether it's just an uncanny-valley thing and people will get over their reservations about AI creations/collaborations, or whether there's a kind of species-ist solidarity here that AI can never get past. But as a longtime blogger (and dare I say a bit of a writer), I don't think I'll ever pass off even AI-assisted versions as my own again. Forget about others- I feel tainted, and more importantly it takes away the pleasure of spending hours working with, struggling with, being surprised by, being disappointed with, a post.
That feeling of: it's mine, with all its deficiencies, all mine.

PS: I really want to talk about the four wonderful books I've finished this year so far- Ted Chiang's Stories of Your Life and Others, Sonia Faleiro's The Good Girls, Amitav Ghosh's Sea of Poppies, and Dan Davies' The Unaccountability Machine (halfway through)- so, hopefully soon

1I'm tempted to add notions of honour, guilt, nostalgia and the like but I suspect they are easy to evolve in pack animals
2This conviction is driven primarily by self-observation/experimentation, but some theoretical/scientific insights have been garnered from readings of Prof. Nicholas Humphrey's The Inner Eye, Prof. Anil Seth's Being You, Prof. Daniel Dennett's Consciousness Explained, and readings around Prof. Daniel Kahneman's Thinking, Fast and Slow
3How this works in conjunction with the Transformer architecture I haven't explored yet
4I personally immensely respect individual actions of moral courage because I believe world-changing movements originate there- Individual imagination and courage "is all I have" as it were
5As evasively spurious as it sounds
6I admit I don't really understand the Satyam part of the triumvirate
7Funnily enough, I don't even like interacting with an LLM that much8, so it was not only lazy but also wrong on my part to subject someone else to its output
8Except on occasions when I need its anodyne responses to make me feel less bad about myself