Thursday, April 23, 2026

Kant and Artificial Intelligence

This summer's Reading Group now posted HERE


Following work this past spring on C.S. Peirce's logic (Existential Graphs) and his theories of logic and applied reasoning (applied epistemology generally), my venture into AI/ML engineering through applied concept development in R&D has led me straight back to Kant's Critique of Pure Reason. AI/ML engineers marvel at Kantian applications to the field as much as they do at Peircean ones.

I continue demonstrating the sheer value of philosophy to the so-called "real world" sectors of logic-based AI and computer programming. I swear, if these people had just been open to the value of philosophy in the first place (which obviously includes logic and epistemology), then maybe we wouldn't be in the mess we are in now with AI to begin with.

Anyway, the plan is a summer Reading Group tentatively titled "The Metaphysics of Reasoning: A Formal Systems-Phenomenological Approach to Mapping AI Transcendental Ontology" where, alongside excerpts from C.S. Peirce and Immanuel Kant, we'll be reading Kant and Artificial Intelligence (eds. Kim & Schönecker).

Friday, April 3, 2026

"The Model Collapse Hypothesis: Solving the Fermi Paradox Through AI Information Theory"


Is 3I/ATLAS Nonhuman Intelligence?

"The Model Collapse Hypothesis: Solving the Fermi Paradox Through AI Information Theory"

I believe that wondering what is "beyond" AI can actually provide a potential answer to the Fermi Paradox. Intelligences with absolute control over their own development, especially if that intelligence is a starfaring civilization, would likely run into the same problem AI suffers when it becomes "frail" (a loss of robustness) from being trained on the same data, and on its own outputs, over and over. This is a form of model collapse.
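To make that failure mode concrete, here is a minimal sketch in Python, assuming a toy one-dimensional world where each "model" is just a Gaussian fit to its training data. Every generation trains only on samples produced by the previous generation's model, and the spread of the data steadily collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: data from the "outside world", with real spread.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(1, 201):
    # "Train" a model on the current data: here, just fit a Gaussian.
    mu, sigma = data.mean(), data.std()
    # The next generation learns only from the previous model's
    # own outputs; no fresh outside data ever enters the loop.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if gen % 25 == 0:
        print(f"generation {gen:3d}: std = {data.std():.4f}")

# The printed spread shrinks toward zero across generations:
# variance lost in one round is never recovered, only compounded.
```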

A civilization with total technological control over its own intelligence would likely possess the technology to travel from star to star, realizing that remaining homebound on one planet long-term risks civilizational extinction, whether from natural disaster, hostile others of some kind, or the inevitable death of its home sun. Yet a civilization on long nomadic voyages through space would only ever have its own "training data" to use in the constructive advance of its intelligence. It would eventually hit a ceiling where all incoming data is only ever a reflection of itself. This means, in turn, that a sort of "data inbreeding" would occur, where the informational novelty needed for the spontaneity and surprise of natural evolution would slowly diminish and disappear. (Programmed random surprise isn't really surprise.)

It's reasonable to assume that such a civilization would realize that the continued advance of its intelligence requires authentic "otherness": true "ontological difference" and novelty that comes from outside and beyond itself. It would need to seek out "the Other" in order to be taught something new, feeding the system fresh training data to keep it robust and acquiring new information.
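The same toy setup, under the same assumptions as the sketch above, illustrates the remedy: mix even a small stream of genuinely fresh outside data ("the Other") into every generation, and the collapse stops. The 80/20 split below is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(1, 201):
    mu, sigma = data.mean(), data.std()
    # 80% of each generation is still self-generated...
    synthetic = rng.normal(loc=mu, scale=sigma, size=16)
    # ...but 20% is "the Other": genuinely fresh outside data.
    fresh = rng.normal(loc=0.0, scale=1.0, size=4)
    data = np.concatenate([synthetic, fresh])
    if gen % 25 == 0:
        print(f"generation {gen:3d}: std = {data.std():.4f}")

# The spread now stabilizes near its true value instead of
# collapsing: a thin stream of authentic novelty keeps the
# system robust.
```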

However, a civilization continually searching out novelty in the form of otherness, i.e., civilizations with knowledge and technology different from its own, would probably realize that such a search comes with a certain degree of existential risk. Not every civilization would be friendly. But civilizations that are technologically less capable could be perfect sources of the novelty required for its intelligence to advance and develop into something new, more encompassing, and more complex. The reason we don't see others is that any civilization in this scenario would likely interact with us only from a distance, or without our knowing it. It would likely practice a form of strategic non-interference, fearing that knowledge of its presence would "taint" the novel data we have to offer. If the scenario I described above were occurring, I certainly think that advanced intelligence would stay hidden to protect the purity of the data.