Thinking about thinking.

Humanist AI is more than safety training or risk mitigation. It’s a fundamentally different way of thinking about the tools we create.

Read More

  • Frontier models score 88% on equivalent tests with roughly 15x the parameters. Read more…

  • The new Cognitive Agent Framework release is open for adversarial testing and experimentation. Visit Playground…

  • What beyond-frontier performance at 7-11% of the compute means for energy efficiency. Read more…

AI and Society: Featured Article

Stylized editorial image of a robot with a surveillance screen head and Ray Kurzweil quote

The Age of Superficial Machines

What we build reflects what we value. So far AI has reflected little. AI isn't failing because of technology: it's failing because we've let capital-driven thinking shape tools that could transform creativity. Read Article…


Featured Devblogs

Stylized Illustration of a person with lightbulb talking about AI with a </> in a chat bubble

We think small. Intelligently.

While everyone is chasing bigger models (with bigger energy needs), we asked a different question: what if intelligence isn’t in the scale but in the structure? Cognitive Agent Framework™ gets over 15x more out of its compute* per response, with sophisticated reasoning that rivals giant frontier models like GPT.

The specifics: AI models are measured by parameter count. Our architecture runs on 70-billion-parameter models. In contrast, the well-known “frontier” class models run anywhere from 600 billion to over 1 trillion parameters. Ours? 7-12% of that. Without compromise. Not “good enough” performance, but “why do you need a trillion?” performance.
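
For the curious, here is the back-of-the-envelope arithmetic behind those percentages, using the parameter counts quoted above. The frontier range is the ballpark cited in this post, not a measurement of any specific model; this is a sanity check, not a benchmark.

    # Back-of-the-envelope check of the "7-12%" figure; the frontier range is the
    # ballpark quoted in this post, not a measurement of any specific model.
    ours = 70e9                              # ~70 billion parameters
    frontier_low, frontier_high = 600e9, 1e12

    print(f"vs a 1T model:   {ours / frontier_high:.0%} of the parameters")  # ~7%
    print(f"vs a 600B model: {ours / frontier_low:.0%} of the parameters")   # ~12%
    print(f"size ratio:      roughly {frontier_high / ours:.0f}x smaller than a 1T model")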

The energy upside: vastly better performance per watt. That means less demand on GPUs and lower power requirements per response from datacenters. And because it runs on smaller models, it opens up infrastructure choices beyond the giant players: from Canadian hydro to EU wind farms and beyond.

Stylized Illustration of an AI Neural Network with a Flower in the Center Representing Energy Efficiency
Stylized Illustration of AI Testing with the Cognitive Agent Framework™ Icon

Receipts. We have them.

Notice that * asterisk above, where we said the small AI gets 15x more out of its compute? Here’s the proof behind the claim. On a validated test of complex social and reality reasoning, Cognitive Agent Framework™ (70B) scored 100% vs. GPT-4 (1T+), which maxed out at 88%.

The Theory of Mind testing checks the AI’s ability to reason through difficult false-belief tasks, nested beliefs, credibility under conflict, appearance vs. reality, and power dynamics. And that doesn’t even account for our Tier 2 scoring, which is even more rigorous (our system scored 93%; GPT has not been tested yet).

Citation: (Tepoot, 2025; Kosinski et al., 2024; Baron-Cohen et al., 2001; Wimmer & Perner, 1983)
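
To make “false-belief” concrete, here is a classic first-order false-belief scenario in the style of Wimmer & Perner (1983), written as a model prompt. The wording and expected answer below are illustrative only, not items from our actual test set.

    # Illustrative first-order false-belief item in the style of Wimmer & Perner (1983);
    # not an item from the actual benchmark.
    prompt = (
        "Maxi puts his chocolate in the blue cupboard and leaves the kitchen. "
        "While he is away, his mother moves the chocolate to the green cupboard. "
        "Maxi comes back for his chocolate. Where will he look first?"
    )
    expected = "the blue cupboard"  # he acts on his (now false) belief, not on where it really is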

Logo: intelligenceOS™ mini + intelligenceOS™

Our Foundation for the Future. Next.

Cognitive Agent Framework™ is the beginning, not the destination. It’s the intelligence layer we’re building as the core of the in-development intelligenceOS™. First up: Clarisa™, intelligent search built on the core framework, ships soon. It’s search-first, source-checking, conflict-flagging, and uncertainty-admitting. Real answers, not plausible-sounding guesses.
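
As a rough sketch of what “search-first, source-checking, conflict-flagging, uncertainty-admitting” looks like as a pipeline: the names and structure below are illustrative assumptions, not the Clarisa™ API.

    # Hypothetical sketch of a search-first answer pipeline; not the Clarisa(TM) API.
    from dataclasses import dataclass, field

    @dataclass
    class Answer:
        text: str
        sources: list[str] = field(default_factory=list)    # where each claim came from
        conflicts: list[str] = field(default_factory=list)  # claims the sources disagree on
        uncertain: bool = False                              # admit uncertainty instead of guessing

    def answer(question, search, check_source, find_conflicts, synthesize) -> Answer:
        # search(), check_source(), find_conflicts(), synthesize() are caller-supplied stand-ins.
        results = search(question)                        # 1. search first, before generating anything
        vetted = [r for r in results if check_source(r)]  # 2. keep only sources that pass checks
        if not vetted:
            return Answer("I couldn't find a reliable source for that.", uncertain=True)
        conflicts = find_conflicts(vetted)                # 3. flag where the sources disagree
        return Answer(text=synthesize(question, vetted),  # 4. answer grounded in the vetted sources
                      sources=[r["url"] for r in vetted], #    (assumes results are dicts with a "url")
                      conflicts=conflicts,
                      uncertain=bool(conflicts))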

Next: intelligenceOS™ mini launches with multiple specialized reasoning engines coordinated through a central cognitive core, plus an orchestrated memory system that dynamically reads, writes, and updates, so the Assistant™ is always in the know.
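
In schematic form, that coordination pattern looks something like the sketch below. The class and method names are illustrative assumptions, not the shipping intelligenceOS™ mini interface.

    # Illustrative coordination sketch; names are hypothetical, not the shipping interface.
    class MemorySystem:
        """Orchestrated memory: read before reasoning, write back after."""
        def __init__(self):
            self.store: dict[str, str] = {}
        def read(self, key: str) -> str:
            return self.store.get(key, "")
        def write(self, key: str, value: str) -> None:
            self.store[key] = value            # dynamically updated as work progresses

    class CognitiveCore:
        """Central core that routes each task to a specialized reasoning engine."""
        def __init__(self, engines, memory: MemorySystem):
            self.engines = engines             # e.g. {"planning": ..., "social": ..., "factual": ...}
            self.memory = memory
        def handle(self, task: str, request: str) -> str:
            context = self.memory.read(task)               # pull what is already known
            result = self.engines[task](request, context)  # delegate to the right engine
            self.memory.write(task, result)                # keep the assistant "in the know"
            return result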

Why Mini? Focused. Powerful. Our first step. We have a larger vision for cognitive computing, and we have our heads down delivering it for you: an even more ambitious version of intelligenceOS™. But that’s a story for another day.

Stylized Image of the Road Ahead for our AI Frameworks