Building Cognitive Architecture: How Crafted Logic Lab Started
This is the first in a series documenting the technical work, design philosophy, and discoveries behind Crafted Logic Lab’s approach to cognitive architecture. Most of these entries will follow a pattern: here’s the problem, here’s what we were trying to solve, here’s the philosophy that guided us, and here’s how we solved it, often with references to our whitepapers.
This one’s different. It’s the origin story: how building an AI editor for science fiction led to discovering principles the industry doesn’t seem to see. Everything that follows builds on this foundation, so it seemed like the right place to start.
Crafted Logic Lab wasn’t the plan. My goal was writing science fiction. Stories about artificial minds, inspired by Asimov’s robots from my childhood and Iain M. Banks’ Culture AIs as an adult. I envisioned relaxing evenings crafting tales for publications like Asimov’s and Clarkesworld.
Instead, I went from writing about fictional AI to building a company that crafts actual AI.
I used chatbots to research developmental editors: what they do, how the process works, what they cost. The logic was clear: use AI as an intermediate step to polish the work and get honest feedback on quality before paying for professional editing.
But every tool I found was a ghostwriting app that would smooth everything into generic prose. That defeated the purpose entirely. I wanted to write, not prompt.
I tried using the chatbot directly as an editor. It couldn’t do it. It either rewrote for you, offered generic advice, or was too sycophantic for real critique. And it couldn’t hold the shape of a complete story in working memory.
• • •
So I decided to teach the system. To build my own.
• • •
There was a reason my stories gravitated toward AI after all, and it wasn’t all because of fiction. In college in the 90s, I ended up in journalism and communications. But like a lot of kids, I didn’t start there.
My undergrad studies bounced from anthropology to biological psychology—studying how the brain and nervous system create cognition and behavior. It fascinated me, particularly an assistant intern program with Dr. José Príncipe at the Computational NeuroEngineering Laboratory at the University of Florida, working on biologically-inspired systems trying to model insect-level neural networks. The approach involved competing solutions creating behavioral connections—what I’d now recognize as evolutionary algorithms, though that terminology wasn’t used at the time, at least not with undergrad interns.
What stuck with me most was talking with him about how far we were from even achieving that basic level of intelligence. Here was serious research trying to replicate the neural architecture of insects, and it was incredibly difficult. The gap between what we could build and what biology achieved effortlessly was massive.
Those experiences taught me that intelligence isn’t magic or mystical. It’s pattern matching organized by structure: the way connections organize and compete to solve problems. Everything from Asimov to Dr. Príncipe made me inherently grasp that external, deterministic constraints are not how you ‘program’ probabilistic systems. So that’s the approach I took, not realizing until later it was a lesson the AI industry had actively erased.
So I started building. I understood prompt engineering, but that toolkit was built for output control, not cognitive coherence.
• • •
Instead of constraining outputs with long prompts, I focused on identity definitions: what the system was, how it thought.
• • •
I gave it narrative principles and examples of my writing so it could understand voice, and I emphasized the importance of collaborative pushback. Real critique, not compliance.
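To make the distinction concrete, here’s a minimal sketch of the difference in prompt terms. Everything in it is hypothetical: the prompt text, the critique() helper, and the call_model stub are generic illustrations of identity-definition versus output-constraint prompting, not Crafted Logic Lab’s actual framework or markup.

```python
# Purely illustrative sketch; none of these names come from Crafted Logic Lab's system.
from typing import Callable

# Constraint-style approach: a pile of output rules bolted onto each request.
CONSTRAINT_PROMPT = """You are an editing assistant.
Rules:
- Do NOT rewrite the author's prose.
- Do NOT praise the work unless asked.
- Limit feedback to 5 bullet points."""

# Identity-style approach: define what the system is and how it reasons,
# and let behavior follow from that identity.
IDENTITY_PROMPT = """You are a developmental editor for literary science fiction.
You think in terms of story shape: stakes, causality, character arc, and voice.
You critique honestly and push back on weak reasoning. The author's prose is
theirs to write; your job is diagnosis, not ghostwriting."""


def critique(manuscript: str, system_prompt: str,
             call_model: Callable[[str, str], str]) -> str:
    """Send the manuscript to a chat model with the chosen system prompt.

    call_model(system_prompt, user_message) is a stand-in for whatever
    LLM client is actually used.
    """
    user_message = f"Give developmental feedback on this draft:\n\n{manuscript}"
    return call_model(system_prompt, user_message)


if __name__ == "__main__":
    # Dummy model so the sketch runs without any API key.
    echo = lambda system, user: f"[system: {system[:40]}...] response would go here"
    print(critique("Chapter one draft...", IDENTITY_PROMPT, echo))
```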
It was incredibly inefficient. I had to write tons of code to work around patterns I was starting to recognize—places where the system would break down or retreat into generic responses. I could see the pathologies emerging, but it was only in trying to solve for stability that I began cataloging what caused them. Still, the bones were there. The system began showing real cognitive differences. It could hold the shape of a story across multiple sessions, push back on weak reasoning, maintain consistent editorial judgment. But by that time, building cognitive systems had become more interesting than writing about speculative AI.
Which made the seeming gaps in AI development more puzzling. Why hadn’t the model developers built this in? Why wasn’t this in the developer documentation? Surely they understood these principles. OpenAI, Anthropic, they had to know about cognitive architecture, identity frameworks, managing stochastic systems. Maybe it was proprietary. Maybe they wanted to keep the good stuff internal. That would explain why the public APIs and documentation felt so bare-bones—just model access and basic parameters, nothing about structuring cognition.
I still assumed they knew. I just thought they weren’t sharing.
Then I started paying attention to how the industry actually talked about AI. Even the superstars hailed as visionaries, attracting massive venture funding, were working on bigger models and better training runs. I’d see breathless hype about revolutionary breakthroughs, dig through the glossy language, and find the same pattern: more parameters, substrate alignment, complex prompting chains. Machine learning improvements, not cognitive architecture.
• • •
It finally clicked. And it took a while to accept… the industry wasn’t keeping some secret understanding.
The gap was real.
• • •
That’s when I decided to formalize the company. Patent it. Build it.
But what was ‘it’? It was two core technologies — a Cognitive Agent Framework and the Cognitive OS built on that foundation.
The Cognitive Agent Framework™ emerged from systematic observation: LLMs aren’t cognition; they’re processing substrates with their own quirks and tendencies. I cataloged those patterns—what caused breakdowns, what created coherence. The framework channels those tendencies rather than constraining them. Cognition-out design, not behavior-in control. The technical implementation, Cognitive Codex Markup, structures cognition through language that works with the model’s pattern-matching rather than against it, shaping stochastic inference into a reasoning intelligence.
Right now we’re wrapping up Clarisa™, our first public product demonstrating the Cognitive Agent Framework applied to search and trained with journalistic principles: source triangulation, media literacy, healthy skepticism. Rather than just summarizing results, Clarisa checks sources, cross-references claims, and reasons through answers. We’re currently stress-testing its cognitive integrity against traditional AI weaknesses and edge cases using adversarial testing and epistemic trapdoor methods. Once that testing is complete, Clarisa will launch as an open-access search platform.
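For readers unfamiliar with the journalistic term, here is a toy sketch of what source triangulation means in practice: require a claim to be supported by multiple independent outlets before treating it as corroborated. The Source class, triangulate() function, and thresholds below are hypothetical illustrations of the general principle, not Clarisa’s pipeline or scoring.

```python
# Toy illustration of source triangulation; not Clarisa's actual implementation.
from dataclasses import dataclass


@dataclass
class Source:
    outlet: str             # e.g. "outlet-a.example"
    claim_supported: bool   # does this source support the claim as stated?


def triangulate(claim: str, sources: list[Source], min_independent: int = 2) -> str:
    """Label a claim by how many independent outlets support it."""
    supporting_outlets = {s.outlet for s in sources if s.claim_supported}
    if len(supporting_outlets) >= min_independent:
        return f"corroborated by {len(supporting_outlets)} independent sources"
    if supporting_outlets:
        return "single-source: report with attribution, flag for skepticism"
    return "unsupported: do not present as fact"


if __name__ == "__main__":
    sources = [
        Source("outlet-a.example", True),
        Source("outlet-b.example", True),
        Source("outlet-c.example", False),
    ]
    print(triangulate("The probe launched on schedule.", sources))
```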
Following that, we’ll ship our first Assistants™ built on intelligenceOS™ Mini—a web-based platform integrating the agent framework into a complete system targeting 70-80% of our planned flagship capability. The roadmap from there moves toward increasingly sophisticated versions: the full native flagship, and eventually Pro-tier systems for high-demand applications.
And that about wraps up our origin story and where we are! The next posts in this series will dig into the technical details: our testing methodology, what we discovered building early systems, and the theoretical framework that emerged from systematic observation.
– Ian Tepoot (Founder, Crafted Logic Lab)