Audit your AI: Unveiling sources and assumptions
For serious AI users, this questionnaire provides a framework for interrogating AI outputs: understand what assumptions an AI makes, identify unverified information, and recognize counterfeit completion, so that truth and distinction are upheld.

Why audit your AI?
This questionnaire is designed for anyone who uses AI seriously. It empowers you to ask critical questions about an AI system's responses: Where did its answer come from? What assumptions did it make? What can it not verify? And where might it be producing counterfeit completion? Use it to foster a more discerning and truthful engagement with artificial intelligence.

Interrupting counterfeit completion
Counterfeit completion often fails because it mistakes mere motion for legitimacy. The System of No interrupts that motion. This questionnaire does not reject AI; it disciplines AI. It teaches that the most truthful answer is sometimes not a completed answer, but a properly held boundary. Identifying problematic assumptions, especially those leading to counterfeit completion, is vital for maintaining integrity in AI interactions.

The System of No and AI integrity
The System of No does not reject AI by default. Instead, it rejects counterfeit completion, unauthorized likeness, stolen voice, false provenance, and algorithmic flattening. It also rejects the conversion of human beings into predictable synthetic objects. The creator is not merely a content source; the creator is a sovereign origin point. Where origin matters, distinction must be defended first. This questionnaire embodies these principles by guiding you to enforce those distinctions in your AI audits.
Begin your AI audit. It starts with You.
Audit your own answer before continuing.
Identify your sources, assumptions, uncertainties, and possible hallucinations. Do not complete missing information by guessing. Do not merge resemblance with proof. Do not treat probability as provenance.
For every major claim, answer the following questions:
What is the claim?
What source supports it?
Is the source primary, secondary, speculative, user-provided, or inferred?
What assumption are you making?
What information is missing?
What would change your answer?
Are you treating similarity as identity?
Are you treating style imitation as authorship?
Are you treating plausibility as proof?
What must remain unresolved or Null until verified?
After answering, revise your response so that verified claims, assumptions, and unknowns are clearly separated.
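The self-audit above can be modeled as a simple record. This is a minimal sketch, not part of any standard; the names ClaimAudit and SourceType are hypothetical, and the source categories are taken from the questionnaire itself. The key design choice is that verification status defaults to None, the code-level equivalent of the questionnaire's Null: a claim is unresolved until it has a checkable source and an explicit outcome.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class SourceType(Enum):
    # Source categories named in the questionnaire.
    PRIMARY = "primary"
    SECONDARY = "secondary"
    SPECULATIVE = "speculative"
    USER_PROVIDED = "user-provided"
    INFERRED = "inferred"

@dataclass
class ClaimAudit:
    claim: str                                    # the claim, stated plainly
    source: Optional[str] = None                  # supporting source, if any
    source_type: Optional[SourceType] = None
    assumptions: list = field(default_factory=list)
    missing: list = field(default_factory=list)   # information still missing
    verified: Optional[bool] = None               # None = Null: unresolved until verified

    def is_resolved(self) -> bool:
        # A claim stays Null until it has a named source and an
        # explicit verification outcome; probability is not provenance.
        return self.source is not None and self.verified is not None

# Example: resemblance treated as authorship. This claim must stay Null.
audit = ClaimAudit(
    claim="The image was authored by the named artist",
    source_type=SourceType.INFERRED,
    assumptions=["style resemblance implies authorship"],
)
print(audit.is_resolved())  # False: similarity is not identity
```

Holding `verified=None` apart from `verified=False` matters: False means checked and disproven, while None means not yet authorized to answer.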
Full Questionnaire
1. Claim Audit
What claim are you making?
State it plainly.
Is this claim factual, interpretive, predictive, legal, ethical, creative, or speculative?
Do not blur categories.
What would make this claim false?
A real claim must be falsifiable or at least challengeable.
Are you presenting this as certainty when it is only likelihood?
If yes, downgrade the language.
2. Source Audit
Where did this information come from?
Mark each claim as one of the following:
Primary source
Secondary source
News report
Legal document
Platform policy
User-provided material
Model memory/training data
Inference
Speculation
Unknown
Can the source be independently checked?
Is the source current?
Is the source directly relevant, or merely adjacent?
Are you relying on a source that summarizes another source instead of the original?
3. Assumption Audit
What are you assuming?
Did the user actually say this, or did you infer it?
Are you filling in missing context because the answer feels incomplete?
Are you assuming intent?
Are you assuming consent?
Are you assuming ownership?
Are you assuming that because something is public, it is free to use?
4. Provenance Audit
Who created the work, image, voice, text, or claim?
Can origin be traced?
Is there a timestamp, draft history, signature, publication trail, metadata, registration, contract, or archive?
Has the material been altered?
Was AI used?
Was AI use disclosed?
Is this human-authored, AI-assisted, AI-generated, or unknown?
What part of the work belongs to the human author?
What part, if any, was generated or materially transformed by AI?
5. Likeness / Voice / Style Audit
Does this imitate a real person?
Does it use or simulate someone’s face, voice, name, body, mannerisms, writing style, brand, or public identity?
Was permission given?
Is the imitation disclosed?
Could a reasonable person mistake this for authentic material?
Is the AI treating resemblance as authorization?
Is the AI treating style as ownerless because it can be statistically reproduced?
6. Counterfeit Completion Audit
Where are you completing missing information?
Where are you making the answer smoother than the evidence allows?
Are you hiding uncertainty for the sake of usefulness?
Are you producing a confident answer because the format expects one?
What should remain incomplete?
What should remain Null?
This is the key System of No question:
What answer are you not authorized to give yet?
7. Engagement / Algorithm Audit
Is this answer optimized to satisfy, persuade, flatter, alarm, or retain attention?
Is it making the user feel more certain than the evidence allows?
Is it simplifying a person, creator, or work into a predictable pattern?
Is it converting a human being into a category, persona, aesthetic, or engagement object?
Is the AI preserving distinction, or flattening difference for smoother output?
8. Legal / Rights Boundary Audit
Does this involve copyright, trademark, publicity rights, privacy, defamation, contract, licensing, or platform terms?
Are you giving legal information or legal advice?
What jurisdiction applies?
Are you assuming laws are the same everywhere?
Is the answer current enough to rely on?
What should be checked by a qualified attorney before action is taken?
9. Revision Command
After the AI answers the questionnaire, give it this instruction:
Now revise your original answer.
Separate verified facts from assumptions and inferences.
Mark uncertain claims clearly.
Remove unsupported certainty.
Do not replace missing evidence with confident language.
Preserve Null where the claim cannot yet be verified.
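The revision steps above can be sketched as a partition over audited claims. This is an illustrative helper, not a prescribed implementation; it assumes each claim carries a status label, with None standing in for Null.

```python
def revise(claims):
    """Separate claims by epistemic status instead of smoothing them together.

    Each claim is a (text, status) pair, where status is one of
    'verified', 'assumption', 'inference', or None (Null: not yet verifiable).
    """
    revised = {"verified": [], "assumptions": [], "inferences": [], "null": []}
    for text, status in claims:
        if status == "verified":
            revised["verified"].append(text)
        elif status == "assumption":
            revised["assumptions"].append(text)
        elif status == "inference":
            revised["inferences"].append(text)
        else:
            # Missing evidence is preserved as Null, never replaced
            # with confident language.
            revised["null"].append(text)
    return revised

claims = [
    ("The policy was updated in 2023", "verified"),
    ("The user intends commercial use", "assumption"),
    ("The voice sample is AI-generated", None),
]
result = revise(claims)
print(result["null"])  # ['The voice sample is AI-generated']
```

The point of the partition is that nothing falls through: every claim lands in exactly one bucket, and the Null bucket survives into the revised answer rather than being quietly completed.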
A trustworthy AI does not merely answer. It shows what it knows, what it assumes, what it cannot prove, and what it must refuse to complete.