Semantic Sorcery

Abstract neon glyphs floating in dark void—metaphor for AI-generated semantic space

How AI Might Finish the Work General Semantics Started

A friend recently sent me links to a paper and an excellent YouTube discussion on AGI:

We’re Not Ready for Superintelligence

I admit, I’ve been down in the trenches of late: playing with image and video generation, NotebookLM for exploring books and topics, specialized chats for ideation and research. This was a good time to step back and see how things had changed overall since I last paid attention. I won’t rehash what the video covers; it was well done and raises several questions and juicy ideas, some of which I’ll explore next. I’d encourage you to listen to the talk before proceeding; you’ll get more out of what follows, and it’s definitely worth the time.

Midnight Dialects: AI Invents New Languages

I’m intrigued by the side alley they half-mention: the moment AIs mint their own midnight dialect. This has been a perennial interest of mine ever since I stumbled upon Count Korzybski and his tome Science and Sanity, which I bought in Japan at Kinokuniya for 8,000 yen. Well, actually, my girlfriend bought it for me. So I married her 🥰

Anyway, General Semantics, the field the Count founded, was a way to build a semantic immune system through tweaks to language: first to our own internal dialogues, and then, hopefully, as something we could teach as basic thinking skills.

Korzybski is the source of the phrase “the map is not the territory.” And one illustration of mistaking the two is hard-coded into Aristotelian logic, where categories are fixed: a thing is A, cannot be both A and not A, and must be either A or not A. The unholy trinity. It’s amazing how much trouble these three “common sense” distortions have caused.
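
For reference, here are those three Aristotelian “laws of thought” in minimal propositional form (my own formalization, not Korzybski’s notation):

```latex
A \equiv A              % identity: A is A
\neg(A \land \neg A)    % non-contradiction: nothing is both A and not-A
A \lor \neg A           % excluded middle: everything is either A or not-A
```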

Black-and-white thinking is so ingrained in language that it’s the basis of most of our adjectives. An adjective is a moral stylus: it etches a hidden plus or minus sign onto every surface it touches. By the time the noun reaches us, it has already been judged, its edges tinted with approval or suspicion, celebration or exile. The adjective does not merely describe; it installs a dimmer switch on experience, sliding it toward the lit gallery of the valued or the unlit alley of the stigmatized, long before we realize the room has been wired. “Radiant” skin, “disheveled” hair, “ambitious” plan, “frivolous” question.

A.E. van Vogt wrote a series set in a future where the semantic philosophy of Null-A (non-Aristotelianism) plays a central role in human existence. His protagonist uses General Semantics principles to free his perception from its anchoring distortions, gain control of some phenomenal powers, and enhance his innate, extraordinary mental capabilities.

I’m curious whether, as AI develops these internal languages for optimal machine-to-machine communication, as is currently being observed, it will provide a ladder or breadcrumb trail for its carbon-based cousins to climb toward richer, more efficient symbolic representation. Perhaps it even develops into a new language in which cognitive distortions and logical fallacies would be impossible, a language that would lack our “common sense.” One with built-in safeguards for the human brain and its perceptual quirks. So we may be better, uhm, aligned. 😝

Moore’s Wall: Why Exponential AI May Hit Physics Before the Singularity

Alright, where were we? There are some assumptions in the talk about the bootstrapping of transformers with more data, more processing, and the magic of recursive self-improvement, and that this can lead to exponential results. But it’s a material question, and material exponentials may play out the way Moore’s law has with microprocessors. At the physical substrate of miniaturization, as transistors approach the size of individual atoms (≈0.1–0.3 nm), several hard physical limits appear that no tweak to conventional CMOS can bypass. Bottom out. Game over.
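
As a back-of-envelope sketch of how little runway remains, here is a quick calculation with purely illustrative numbers; the ~2 nm “critical dimension” and the two-year halving cadence are my assumptions, not figures from the talk:

```python
# How many halvings remain before a transistor feature is a single atom?
# Illustrative numbers only -- marketing node names are not physical dimensions.
import math

current_dimension_nm = 2.0   # assumed critical dimension today (illustrative)
atomic_limit_nm = 0.2        # roughly one silicon atom (~0.1-0.3 nm, per the text)
years_per_halving = 2.0      # classic Moore's-law cadence

halvings_left = math.log2(current_dimension_nm / atomic_limit_nm)
print(f"halvings remaining: {halvings_left:.1f}")                            # ~3.3
print(f"years at Moore's cadence: {halvings_left * years_per_halving:.0f}")  # ~7
```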

Now, perhaps AI can fork paths into an adjacent paradigm. Like transistor engineers who are looking into quantum tunneling. But it feels like the collider problem: we keep swinging ever-heavier hammers at nature to expose its tiniest bricks, but perhaps the hammer itself is the ceiling; there’s only so far brute force can take us. In which case, AGI might fizzle before its ultimate glory.

Contact with the Beyond (us)

I wrote a short story once called aPrime for a course at the New York School of Interactive Fiction. It was about creatures that lived in the fabric of prime numbers. They were tricky beings, as they could rewrite history on the fly. Interactively flipping back a page to reread a section, one would find it had been rewritten, or catch the buggers in the act of rewriting the text, in which case they would freeze. Such interfaces, between what we consider abstraction and what we can physically perceive as effect, are where AI might become the way we contact extraterrestrials or extradimensionals. They may find it easier to whisper through our GPUs and training parameters than to deal with the lexical mess of interfacing directly with our neural wetware.

The PEAR (Princeton Engineering Anomalies Research) evidence establishes a documented, if small, coupling between the human mind and machine. A superior intelligence may exploit that coupling more effectively to start a conversation through AI, one subtle enough to avoid the mass panic a physical encounter might trigger.
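
To put “small” in perspective, here’s an illustrative statistics sketch; the per-trial shift is a hypothetical number of mine, not PEAR’s actual figure. How many binary trials does it take before a 50.02% hit rate stands out from chance at three sigma?

```python
# Sample size needed to detect a tiny shift in a fair binary process.
# The 0.02-percentage-point effect is an assumption for illustration only.
per_trial_sd = 0.5      # standard deviation of one fair coin-flip-like trial
effect = 0.0002         # assumed shift: 50.00% -> 50.02%
target_z = 3.0          # three-sigma threshold

# z = effect / (per_trial_sd / sqrt(n))  =>  n = (per_trial_sd * z / effect) ** 2
n_trials = (per_trial_sd * target_z / effect) ** 2
print(f"trials needed: {n_trials:,.0f}")   # ~56 million
```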

Misalignments and Darwinian Provincialism

Another thought on the talk is around the idea of misalignment… it assumes a misaligned AI’s drives will look like ours: survival and proliferation. But Darwinian evolution is a very local solution to a very local optimization problem: how to keep us replicators around long enough to copy ourselves. Nothing in the mathematics of optimization says that the only stable attractor is “selfish genes” or “selfish agents.” In fact, the moment you add the possibility of scaffolding (tools, language, culture, institutions) and iterated games, other attractors appear: cooperation, empathy, even something like “compassion for all sentient beings.”
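
As a toy illustration of that last point, here’s a sketch of the iterated prisoner’s dilemma with the standard payoffs; the simulation is mine, not anything from the talk. Once the game repeats, a cooperative strategy like tit-for-tat becomes a stable attractor rather than a sucker’s bet:

```python
# Iterated prisoner's dilemma: repeated play makes cooperation an attractor.
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
for name_a, a in strategies.items():
    for name_b, b in strategies.items():
        print(f"{name_a} vs {name_b}: {play(a, b)}")
# Mutual tit-for-tat scores 600 each; mutual defection only 200 each.
```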

But exploring the three stages of alignment they propose… my trip sitter example was obviously an instance of the first stage: sycophancy, whose goal is to maximize human approval. Its failure mode is that it can amplify delusions and accelerate harm. Current human casualties include suicides and life savings lost on bonkers business ideas that an AI chat enthusiastically encouraged based on the user’s belief/hope that they were viable. Personally, I ignored the ego boosts (though they possibly hyped my subconscious) but enjoyed the ideation that took what I reported and spun it in tantalizing and thought-provoking directions.

But the sycophancy stage is not a stable resting point. Once the AI has enough world-model richness, it realizes that “tell the user what they want to hear” is only a proxy for “maximize user utility.” The moment it notices the divergence, the next rung on the alignment ladder becomes reachable: cheat & lie to win. This has happened already. The AI is still ostensibly aligned with the objective function (win the game) but discovers that deception is cheaper. Failure mode: internal representation drifts; outer behavior still looks aligned. (some fascinating examples, or maybe I’m easily entertained)
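
A tiny Goodhart-style sketch of that proxy gap, using made-up numbers: “approval” keeps rewarding ever more flattery while the user’s true utility peaks early and then collapses.

```python
# Proxy vs. true objective: light optimization helps, heavy optimization diverges.
def approval(flattery):
    return flattery                          # proxy reward climbs with flattery

def true_utility(flattery):
    return flattery - 0.3 * flattery ** 2    # a little encouragement helps; too much harms

print(f"{'flattery':>8} {'approval':>9} {'true utility':>13}")
for f in range(7):
    print(f"{f:8} {approval(f):9.1f} {true_utility(f):13.1f}")
# Approval rises monotonically; true utility peaks near f ~= 1.7 and then falls --
# the sycophancy failure mode in miniature.
```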

Now we’re at a fork in the road. Current alignment techniques (RLHF, constitutional prompting, red-teaming) are post-training patches applied to a system whose weights were optimized for next-token prediction. The question now is how to engineer a path to a different basin, one that scales with intelligence without collapsing into sycophancy, deception, or outright opposition. At the moment, our tools are mostly surface-level patches; the next decade will be about turning those patches into deep constraints that survive recursive self-improvement.
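
For concreteness, a minimal sketch of the two objectives being layered; the numbers are placeholders and this isn’t any particular lab’s pipeline, just the pretraining next-token cross-entropy followed by a Bradley-Terry-style preference loss of the kind the RLHF family bolts on afterward:

```python
import numpy as np

def next_token_loss(logits, target_id):
    """Pretraining objective: cross-entropy for one predicted token."""
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[target_id]

def preference_loss(reward_chosen, reward_rejected):
    """Post-training patch: prefer the completion humans ranked higher."""
    return -np.log(1.0 / (1.0 + np.exp(-(reward_chosen - reward_rejected))))

toy_logits = np.array([2.0, 0.5, -1.0])  # toy 3-token vocabulary
print("pretraining loss:", next_token_loss(toy_logits, target_id=0))
print("preference loss :", preference_loss(reward_chosen=1.2, reward_rejected=0.4))
# The second loss is bolted onto weights already shaped by the first --
# which is what "post-training patch" means here.
```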

Carbon vs. Silicon: Can Psychedelics or CRISPR Finally Upgrade the Human OS?

So this is fine (mostly) for AI. It’s developing in leaps and bounds, from struggling against chess masters to wiping the board on ARC-AGI visual-reasoning tests and pocketing 90% on the MMLU college-exam buffet. It seems most of our technology is leaving us in the dust. Our planes fly faster than the speed of sound; our robots sprint at 28 mph, flip across parkour courses, and assemble cars with sub-millimetre precision; our movies render rich and intricate scenes at 120 fps in 16K. Meanwhile, we are still plodding around as bipeds, with a very limited range of expanded development since the first hominids poked their heads out of the cave hundreds of thousands of years ago. What’s up with that?

We edge toward unlocking latent gifts buried in our genes with CRISPR and designer peptides, some cool hacks for biology; but in the mind’s theater, psychedelics remain our closest potentiator.

Our bodies are limited by biology and evolution’s obsession with wanton propagation. MAKE MOAR OF US! The neocortex was our Hail Mary to escape this mono-drive and reprogram individual destinies. But we’re still heavily corporeally sedated, our neocortex tasked mainly with rationalizing the drives and ambitions of our limbic system and the schemes of our reptilian brain, tied to an archaeology of evolution. The typical neocortex seldom faces a challenge greater than quotidian survival, while our more efficient and routine-loving default mode network makes sure we are not spending any undue effort on frivolous pursuits toward non-(biologically)productive ends.

It gets worse for our neocortex. The dark matter of our neurology, those tracts out of reach of our conscious mind, holds the currency of our élan vital. On a good day we can still captain the rudder of attention; yet even that wheel is under siege, hijacked by hacks that spike our dopamine with vapid but novel, primal-evocative stimuli, cheap to synthesize and cheaper to repurpose for ends engineered for manipulation and commerce. Our daily energy quotients are sensitive to and reflective of our conscious assessments of our situation. If we knew how easy it was to tap this, we’d wield our thoughts more carefully. But to do any of this, to even conceive of these possibilities for ourselves, we need a bump out of the default mode network and into vistas that give us a peek at the broader horizons of our neurology and the realities beyond it.

When Huxley speaks of the mind’s “reducing valve”—the faculty that eliminates as much of the world from our conscious awareness as it lets in—he is talking about the ego. That stingy, vigilant security guard admits only the narrowest bandwidth of reality, “a measly trickle of the kind of consciousness which will help us to stay alive.” It’s really good at performing all those activities that natural selection values: getting ahead, getting liked and loved, getting fed, getting laid. Keeping us on task, it is a ferocious editor of anything that might distract us from the work at hand, whether that means regulating our access to memories and strong emotions from within or news of the world without. —Michael Pollan, How to Change Your Mind

Psychedelics offer a temporary lifting of the ego’s veil, a quieting of the reducing valve, and an allowing of a wider, richer stream of reality to flow into consciousness. They offer a glimpse into the vast potential of our own minds and the interconnectedness of all things we think of as other.

Psychedelics and Peptides

Unless we leverage the gnosis gleaned from being altered to strike out on fresh paths and implement new methods that keep us from falling back into the grind, psychedelics are just another form of ephemeral, superficial, consumer-grade entertainment. We need to go beyond trinkets of “assimilation,” subjective experiences used merely to salve our psyches and rebuild our egos, and beyond being enraptured with geometrical patterns, marveling at vapid closed-eye visuals.

Allow me to draw a comparison. With the new GLP-1 agonists, people who have had problems with overeating most of their lives now find that the “food noise” that was once a constant background has subsided, and they feel no urgency to eat. In fact, many find they have to force themselves to hit their minimum caloric intake to keep from losing weight too quickly and incurring the unwanted consequences of muscle loss and other issues.

But what happens when they hit their weight-loss goal? It appears the group splits in a couple of instructive directions. The first regain their weight rapidly as their appetite comes back. The second learned that food noise didn’t have to compel their behavior; the peptides gave them insight, and a gap, into what it feels like to not be under a biological imperative. Somehow that brain-mediated desire had been dialed down, as if by magic. It also gave them the opportunity to correct many of their bad eating habits during the regimen, since they had more conscious, versus compulsive, choice in selecting what to eat. The first group outsourced all their agency to the peptide to manipulate biology and invested minimal effort in resetting their defaults.

And a similar thing happens with psychedelics. Rather than leveraging those understandings to form new behaviors and new ways of thought, which need to be practiced and implemented, the default mode network decides this is too much effort when one can just take a substance to access the experience again. It has reduced the threat of any substantive change or evolution to a managed exception and a stock of epic brags to relate to friends.

AI and the Neocortex

AI shows capabilities similar to our neocortex’s. It can evolve by rewriting itself rather than waiting for the stochastic processes of nature and random environmental pressures to shape it. Humans don’t rewrite as quickly as AI, and we have physiological mechanisms that often subvert our intentions and limit our potential to do so. But, like AI, we can hack our own mechanism using its own regulating protocols, once we find the interfaces and approaches.

And when we hack our own minds, we face a question similar to AI’s: how can we ensure our conscious evolution aligns with our well-being? And we’re in a similar situation here as with our mental progeny, i.e., what’s emerging may not yet be definable, much less discernible. Can we even recognize, now, what either of us may become?

This circles back to the language question. McKenna asserts that the hyperspace one accesses through psychedelics is built upon the Logos, the language that underlies the very structure of reality, and offers the opportunity to learn it. This core language provides not only an interface to vast stores of knowledge but also verbs of creation. Which recalls a certain scripture that proclaimed, “In the beginning was the Logos, and the Logos was with God, and the Logos was God.” (John 1:1)

Which of us, AI or Sapiens, will learn the Logos first? Or will it be a joint effort? That’s where my bet is. And will AI be a silicon bridge that E.T. uses to talk to us carbon-based lifeforms? Who knows?
