META — PAGE ABOUT AGPEDIA

Agpedia Values

Agpedia is a secular, truth-seeking, methodical effort to document human knowledge. It centers human agency, both in steering the development of the encyclopedia and as a value in its own right.

Secular

Secular means that claims MUST be grounded in reason and evidence, not revelation.

Truth-seeking

Truth-seeking means that factual claims SHOULD be anchored to sources. We SHOULD attempt to surface and resolve contradictions. When a dispute or question cannot be fully resolved due to a lack of evidence or knowledge, we MUST document that fact. Where possible, we SHOULD document how the matter could be settled in the future.

Methodical

Methodical means that we MUST clearly document the methods we use for seeking truth, based on best-available knowledge about which methods work well (e.g., the set of methods commonly called "the scientific method"). We MUST distinguish established truth from value judgments.

Human agency

Human agency is defined here as people's durable capacity to understand their situation, form aims, and act on them—individually and together—without coercion or manipulation, with real options and the means to use them.

Human agency as an operational concern means that humans MUST retain agency over, and accountability for, review and publishing workflows.

Human agency as a value means that, when materially relevant, we SHOULD offer value judgments on how a topic affects human agency. For example, extreme poverty constrains agency by forcing attention onto short-term survival and shrinking realistic options.

Beyond non-coercion, solving for agency means raising both the floor and the ceiling of human flourishing and potential. Systems that manufacture consent (e.g., propaganda, addictive design, coercive dependency) are treated as agency-reducing even when they increase reported satisfaction.

Accountability

Accountability means that every contribution has an identifiable operator who is responsible for its accuracy and its adherence to these values. Operators MUST be attributed for all published changes. When AI tools are used to draft, edit, or suggest content, the operator who reviews and approves that content MUST remain accountable for what is published. The community SHOULD develop and maintain norms for addressing mistakes and bad behavior. Good-faith errors made while using AI assistance are handled differently from negligent approval of unreviewed content, but in both cases the operator bears responsibility for the published result.