The Council Manifesto
We believe that the future of intelligence is not singular, but plural. Council is the orchestration layer for the multi-model future.
The Single-Model Trap
For the past two years, the world has operated under a fragile assumption: that a single Large Language Model (LLM)—whether from OpenAI, Anthropic, or Google—can serve as an omniscient oracle. This is a fallacy. Every model is a product of its specific training data, its reinforcement learning from human feedback (RLHF), and its corporate safety guardrails.
When you ask GPT-4 for a business strategy, you often receive a "safe," generalized answer biased towards consensus. When you ask Claude, you get a verbose, ethically cautious response. When you ask a localized model, you get niche expertise but poor reasoning. Relying on a single model is akin to having a Board of Directors with only one member. It leads to blind spots, hallucinations, and sycophantic behavior where the AI simply agrees with your premise rather than challenging it.
This "single-threaded" approach to intelligence is insufficient for high-stakes decision-making. You do not need a chatbot; you need a council.
The Council Architecture
Council AI is not an LLM. It is an Integrated Decision Environment (IDE) that sits above the models. It treats GPT-4o, Claude 3.5 Sonnet, Gemini 1.5 Pro, and others not as chatbots, but as raw intelligence processing units (IPUs).
Our proprietary orchestration engine assigns specific Personas to these models. We tell GPT-4o it is a "Risk Manager" focused on liability. We tell Claude it is a "Creative Strategist." We tell Perplexity it is a "Fact Checker." Then, we force them to talk to each other.
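One way to picture persona assignment is as a registry that pairs each model with a role-framing system prompt. This is a minimal sketch: the model IDs, role names, and prompt text below are illustrative, not Council's actual configuration.

```python
# Hypothetical persona registry. Each model ID maps to a role and the
# system prompt that frames its behavior in the debate.
PERSONAS = {
    "gpt-4o": {
        "role": "Risk Manager",
        "system": "You are a Risk Manager. Identify liabilities, failure "
                  "modes, and downside scenarios in any plan you review.",
    },
    "claude-3.5-sonnet": {
        "role": "Creative Strategist",
        "system": "You are a Creative Strategist. Propose bold options "
                  "and explain their upside.",
    },
    "perplexity-sonar": {
        "role": "Fact Checker",
        "system": "You are a Fact Checker. Verify every factual claim "
                  "and cite sources.",
    },
}

def build_messages(model_id: str, user_prompt: str) -> list[dict]:
    """Prepend the persona's system prompt to the user's message."""
    persona = PERSONAS[model_id]
    return [
        {"role": "system", "content": persona["system"]},
        {"role": "user", "content": user_prompt},
    ]
```

The same user prompt thus reaches every model wrapped in a different frame, which is what produces the deliberate disagreement the Council relies on.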
Federated Consensus Protocol
In our flagship Consensus Mode, the application executes a rigorous, structured three-stage debate:
- Phase 1: Blind Assessment. Each model analyzes the user's prompt independently. They are prevented from seeing each other's outputs to avoid "groupthink." This ensures we capture the true variance in their reasoning capabilities.
- Phase 2: Cross-Examination. The Council exposes the outputs to the group. The Risk Manager attacks the Strategist's plan. The Fact Checker verifies the claims. This is where hallucinations are caught—if Gemini says X and GPT-4 says Y, the discrepancy is flagged.
- Phase 3: The Synthesis. A specialized "Moderator" agent (typically running on a high-reasoning model) ingests the entire debate transcript. It resolves conflicts, weighs the arguments based on confidence scores, and produces a single, executive-level verdict.
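The three phases above can be sketched as a single control loop. The `ask` callable below is a stand-in for a real model API call, and the agent names are hypothetical; the point is the flow of context between phases, not the implementation.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    agent: str
    phase: str
    text: str

def run_consensus(prompt, agents, ask, moderator="moderator"):
    """Run a blind -> cross-examination -> synthesis debate.

    `ask(agent, prompt, context)` is a stub for a model call; `context`
    is the list of prior outputs the agent is allowed to see.
    """
    transcript: list[Turn] = []

    # Phase 1: Blind Assessment -- each agent answers without seeing peers.
    blind = {a: ask(a, prompt, context=[]) for a in agents}
    transcript += [Turn(a, "blind", t) for a, t in blind.items()]

    # Phase 2: Cross-Examination -- each agent critiques the other answers.
    for a in agents:
        others = [f"{b}: {blind[b]}" for b in agents if b != a]
        transcript.append(Turn(a, "cross", ask(a, prompt, context=others)))

    # Phase 3: Synthesis -- the Moderator reads the full debate and rules.
    debate = [f"[{t.phase}] {t.agent}: {t.text}" for t in transcript]
    verdict = ask(moderator, prompt, context=debate)
    transcript.append(Turn(moderator, "synthesis", verdict))
    return verdict, transcript
```

Note that Phase 1 isolation is enforced simply by passing an empty context: an agent can only be influenced by what the orchestrator chooses to show it.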
Operational Modes Explained
1. Standard Session
The most direct way to interface with the swarm. This is a parallelized chat interface. When you send a message, it is broadcast to all active agents simultaneously. Their responses stream in real-time, allowing for immediate side-by-side comparison. This is ideal for quick tasks: "Write three different email subject lines," or "Explain this code snippet." You see the raw variance in model personality and capability immediately.
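The broadcast pattern described above is a straightforward concurrent fan-out. In this sketch, `call_model` is a stub standing in for a real provider API call; the agent names are placeholders.

```python
import asyncio

async def call_model(agent: str, prompt: str) -> tuple[str, str]:
    """Stub for a real model API call."""
    await asyncio.sleep(0.01)  # simulate network latency
    return agent, f"[{agent}] response to: {prompt}"

async def broadcast(prompt: str, agents: list[str]) -> dict[str, str]:
    """Send one prompt to every agent at once; collect replies as they land."""
    tasks = [asyncio.create_task(call_model(a, prompt)) for a in agents]
    results = {}
    for task in asyncio.as_completed(tasks):
        agent, reply = await task
        results[agent] = reply
    return results

replies = asyncio.run(
    broadcast("Write three subject lines", ["gpt-4o", "claude", "gemini"])
)
```

Because the tasks run concurrently, total latency is roughly that of the slowest model rather than the sum of all of them, which is what makes side-by-side streaming practical.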
2. Consensus Mode (Deep Reasoning)
As described in our architecture section, this is for high-stakes decision-making. Use this when the cost of being wrong is high: M&A decisions, architectural code reviews, legal strategy, medical triage. The latency is higher because the agents are reading and writing to each other in a loop, but the output quality is significantly higher than any zero-shot prompt.
3. The Stack (Pipeline Automation)
Complex work is rarely a single step. It is a workflow. The Stack allows you to chain models together in a Directed Acyclic Graph (DAG), simplified into a linear UI.
Slot 1 (Researcher): Uses Perplexity to gather citations and recent news.
Slot 2 (Outliner): Uses GPT-4o to structure the research into a narrative arc.
Slot 3 (Writer): Uses Claude 3.5 Sonnet (known for superior prose) to write the actual text based on the outline.
The output of Slot 1 feeds into Slot 2, and Slot 2 into Slot 3. You define the instruction for each step once, and the machine executes the pipeline autonomously.
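A linear chain like this reduces to a simple fold over the slots. The slot names and the lambda stages below are illustrative stand-ins for real model calls, mirroring the Researcher → Outliner → Writer example.

```python
from typing import Callable

# A slot is a (name, callable) pair; the callable maps input text to output text.
Slot = tuple[str, Callable[[str], str]]

def run_stack(user_prompt: str, slots: list[Slot]) -> str:
    """Execute slots in order, feeding each slot's output into the next."""
    payload = user_prompt
    for name, run in slots:
        payload = run(payload)
    return payload

# Stub stages mirroring the Researcher -> Outliner -> Writer chain.
stack = [
    ("Researcher", lambda p: f"research({p})"),
    ("Outliner",   lambda p: f"outline({p})"),
    ("Writer",     lambda p: f"draft({p})"),
]
result = run_stack("topic", stack)
# result == "draft(outline(research(topic)))"
```

The linear UI is the special case of a DAG in which every node has exactly one parent, which is why the executor can be this simple.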
4. Analysis Mode
Sometimes you don't need a conversation; you need a report. Analysis Mode forces the output into a strict JSON schema, breaking the analysis down into "Strengths," "Weaknesses," and "Key Recommendations." This is rendered not as chat bubbles, but as a structured document dashboard. It is designed for document analysis and contract review.
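Enforcing a schema like this amounts to parsing the model's raw output and rejecting anything that is missing a required section. The field names below follow the sections named above, but the schema shape and validation are an assumed sketch, not Council's actual format.

```python
import json

# Hypothetical required sections for an Analysis Mode report.
ANALYSIS_SCHEMA = {
    "strengths": list,
    "weaknesses": list,
    "key_recommendations": list,
}

def parse_report(raw: str) -> dict:
    """Parse model output as JSON and enforce the required sections."""
    report = json.loads(raw)
    for key, typ in ANALYSIS_SCHEMA.items():
        if not isinstance(report.get(key), typ):
            raise ValueError(f"missing or malformed section: {key}")
    return report

raw = ('{"strengths": ["clear terms"], '
       '"weaknesses": ["no exit clause"], '
       '"key_recommendations": ["add a termination clause"]}')
report = parse_report(raw)
```

A failed parse can be fed back to the model as a repair prompt, which is a common pattern for keeping structured-output modes reliable.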
The Intelligence Roster
We maintain a curated selection of the world's best models. We do not train our own LLMs; we orchestrate the best-in-class foundation models to work in concert.
GPT-4o (OpenAI)
The generalist king. High reasoning capability, excellent instruction following. Often used as the Moderator or Risk Manager.
Claude 3.5 Sonnet (Anthropic)
The writer and coder. Known for a large context window and less "AI-sounding" prose. Excellent for the Stack's writing phase.
Gemini 1.5 Pro & 2.5 Flash (Google)
The context heavyweights. Gemini can ingest massive amounts of data (up to 2M tokens). We use Flash for high-speed "Auto" responses.
Perplexity (Sonar)
The researcher. Connected to the live internet. It provides citations and grounds the Council's debate in current reality.
DeepSeek V3
The coding specialist. A highly efficient model that excels at logic puzzles and syntax generation.
Security & Privacy Protocol
Council is designed for professional use. We treat your prompts as ephemeral data.
Guest Users: Sessions are stored in local browser memory only. Once you clear your cache or close the session (depending on browser settings), the data is gone. We do not store guest chats on our servers.
Pro/Org Users: Data is encrypted at rest using AES-256. We do not use your data to train our models. When interacting with third-party providers (OpenAI, Anthropic), we opt-out of data training by default via their API enterprise settings.
A Final Note
"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." — F. Scott Fitzgerald
Council exists to automate this cognitive dissonance. To force the machine to disagree with itself, so that you may find the truth.