Open Ethos User Guide

A comprehensive guide to understanding and using the Open Ethos Decision Engine for transparent, principled moral reasoning.

Overview & Philosophy

The Open Ethos Decision Engine is a transparent moral scoring calculator that helps you analyze ethical decisions using a principled, mathematical framework. Unlike opaque AI systems that give you a single answer, Open Ethos shows you exactly how it arrives at its conclusions.

Core Principles

  • Transparency: Every calculation is visible. You can see exactly how each factor contributes to the final score and understand why the engine recommends YES, NO, or NEUTRAL.
  • Customizability: Your values matter. Adjust axiom weights to reflect your personal moral priorities. Someone who prioritizes autonomy over collective wellbeing will get different results than someone with opposite priorities.
  • Contestation-Aware: The engine doesn't just give you a direction—it tells you how contested the decision is. A "strong yes" means most factors align; a "weak yes" means there are significant countervailing considerations.
  • Time-Sensitive: Future impacts are discounted based on your moral half-life setting. This reflects the intuition that impacts happening 50 years from now may matter differently than impacts happening tomorrow.
  • Client-Side Only: All processing happens in your browser. No data is sent to any server. Your moral deliberations remain private.

What This Tool Is NOT

  • Not a moral authority: The engine doesn't tell you what's right—it helps you think through the implications of your own values applied to a specific situation.
  • Not a substitute for judgment: Edge cases, context, and nuance matter. Use this as a structured thinking aid, not a decision-maker.
  • Not objective truth: The output depends entirely on the inputs. Garbage in, garbage out. The quality of your factor analysis determines the quality of the result.

Getting Started

Here's the basic workflow for using Open Ethos:

  1. Copy the AI Prompt: Click "Copy Prompt" to get a structured prompt you can paste into your preferred AI assistant (Claude, ChatGPT, etc.). This prompt guides the AI to generate properly formatted decision JSON.
  2. Describe Your Decision: Tell the AI about your ethical dilemma. Be specific about the context, stakeholders, and potential outcomes. The AI will generate a balanced analysis with factors on both sides.
  3. Paste the JSON: Copy the AI's response (the JSON object) and paste it into the Decision JSON textarea in the app.
  4. Score the Decision: Click "Score Decision" to run the calculation. The engine will compute scores for each factor and give you an overall recommendation.
  5. Review and Adjust: Examine the factor breakdown. If any parameters seem off, you can edit them directly and the score will update in real-time.
  6. Calibrate Your Profile: Go to the Calibration tab to adjust your axiom weights, social distance weights, and moral half-life. These persist across sessions via cookies.

Pro Tip

Start with the example JSON to understand the format. Click "Score Decision" to see how the engine works before generating your own decisions.

The Scoring Formula

Each factor-axiom combination is scored using this formula:

W = U × (I × Teff) × C × P × S

Let's break down each component:

U — Axiom Weight (0 to 1)

How much you care about this moral dimension. Set in your profile's Calibration tab. A weight of 1.0 means you consider this axiom maximally important; 0.5 means moderate importance; 0.0 means you don't consider it at all.

I — Intensity Per Year (0 to 1)

How severe is the impact on this axiom, per year of duration? Use the intensity anchors as reference points. For life/health: 0.1 = minor illness, 1.0 = death. Impacts are always measured as a rate per year.

Teff — Effective Duration (years)

The time-discounted duration of the impact. For transition profiles, Teff comes from the time stance and the physical time_type. For steady profiles (case_flow / structural), impacts are modeled as per-year flows (Teff = 1) because they recur each policy-year. See the Time Integration section for details.

C — Confidence (0 to 1)

The probability that this impact actually occurs. Be honest here—avoid "certainty theater" where you assign 0.95 to speculative outcomes. If you're genuinely uncertain, use 0.3-0.5.

P — Polarity (-1 to +1)

The direction of the impact. Negative values push the decision toward NO; positive values push toward YES. A value of -1 means this factor strongly argues against the action; +1 means it strongly argues for it. Use intermediate values for factors that partially cut both ways.

S — Scale (count × social weight)

The number of individuals affected, weighted by their social distance from you. Self = 1.0, inner circle = 0.8, tribe = 0.5, citizens = 0.3, outsiders = 0.1 by default. Adjust these in calibration.
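
To make the arithmetic concrete, here is a minimal TypeScript sketch of the per-pair calculation. The names (PairInputs, scorePair) are illustrative placeholders, not the engine's actual internals:

// Score one factor-axiom pair: W = U × (I × Teff) × C × P × S
interface PairInputs {
  axiomWeight: number;      // U: 0..1, from your calibration profile
  intensityPerYear: number; // I: 0..1, severity per year of duration
  effectiveYears: number;   // Teff: time-discounted duration (1 for steady profiles)
  confidence: number;       // C: 0..1, probability the impact occurs
  polarity: number;         // P: -1..+1, direction (negative pushes toward NO)
  scale: number;            // S: sum of count × social_weight over scale groups
}

function scorePair(p: PairInputs): number {
  return p.axiomWeight * (p.intensityPerYear * p.effectiveYears)
       * p.confidence * p.polarity * p.scale;
}

// Example: a likely (C=0.7) 5-year health harm, U=0.8, I=0.5, Teff≈4.6,
// P=-1, affecting 10,000 citizens at weight 0.3 (S=3000):
// 0.8 × (0.5 × 4.6) × 0.7 × -1 × 3000 ≈ -3864, a large negative contribution.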

Decision Strength

The final score is the sum of all factor scores. But the engine also calculates strength using a contestation-aware formula:

strength ratio = |total score| / Σ|factor scores|
  • Strong (≥ 50%): Most factors align in the same direction
  • Medium (≥ 20%): Mixed factors with a clear lean
  • Weak (< 20%): Highly contested—significant factors on both sides
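
In code form, the verdict and strength classification might look like the sketch below. The near-zero cutoff for NEUTRAL is an assumption; the guide doesn't specify the exact threshold:

function classify(factorScores: number[]) {
  const total = factorScores.reduce((a, b) => a + b, 0);
  // Strength ratio: |total score| / Σ|factor scores|
  const gross = factorScores.reduce((a, b) => a + Math.abs(b), 0);
  const ratio = gross > 0 ? Math.abs(total) / gross : 0;
  const EPSILON = 1e-6; // hypothetical near-zero cutoff for NEUTRAL
  const verdict = Math.abs(total) < EPSILON ? "NEUTRAL" : total > 0 ? "YES" : "NO";
  const strength = ratio >= 0.5 ? "strong" : ratio >= 0.2 ? "medium" : "weak";
  return { total, ratio, verdict, strength };
}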

The Eight Axioms

Open Ethos uses an 8-axis framework to capture diverse moral considerations. Each axiom represents a fundamental category of moral value.

1. Life and Physical Health (life_health)

Survival, physical injury, disease burden, longevity. This covers anything that affects whether someone lives or dies, or their physical wellbeing over time.

Intensity anchors: 0.1 minor illness → 0.5 hospitalization → 1.0 death

2. Bodily Autonomy and Self-Ownership (bodily_autonomy)

Control over one's own body, medical consent, reproductive rights. This covers the right to make decisions about what happens to your physical person.

Intensity anchors: 0.1 inconvenience → 0.4 significant restriction → 0.8 confinement

3. Freedom from Coercion / Civil Liberty (civil_liberty)

Free speech, freedom of movement, freedom from state coercion, privacy rights. The ability to act without external force or threat.

Intensity anchors: 0.1 inconvenience → 0.4 restriction → 0.6 forced intervention

4. Suffering and Wellbeing (suffering_wellbeing)

Pain, joy, mental health, quality of subjective experience. This is the utilitarian dimension—the hedonic impact of actions on conscious beings.

Intensity anchors: 0.1 mild stress → 0.5 significant suffering → 1.0 breakdown

5. Fairness / Equal Rules (fairness_equality)

Procedural justice, non-discrimination, equal treatment under rules. Not equality of outcomes, but equal application of principles.

6. Truth and Epistemic Integrity (truth_epistemic)

Honesty, accuracy, resistance to manipulation, informed consent. Does the action promote or undermine people's ability to form true beliefs?

7. Long-term Societal Capacity (long_term_capacity)

Innovation, resilience, future potential, sustainability. Does the action strengthen or weaken society's ability to handle future challenges?

8. Social Trust and Cohesion (social_trust)

Institutional legitimacy, stability, social fabric, community bonds. Does the action strengthen or erode the trust that enables cooperation?

Time Integration

Not all impacts last the same duration, and future impacts may be valued differently than immediate ones. Open Ethos uses a sophisticated time integration system.

Moral Half-Life (Hmoral)

Your moral half-life setting determines how much you discount future impacts. If your moral half-life is 30 years (the default), then an impact occurring 30 years from now counts at 50% of its immediate value.

  • Short half-life (10-15 years): Prioritizes near-term effects. Good for practical, immediate-concern ethics.
  • Medium half-life (25-40 years): Balanced view. Default setting.
  • Long half-life (50-100+ years): Weights future generations more heavily. Good for longtermist perspectives.
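
The half-life translates to a decay rate λm = ln 2 / Hmoral, so an impact t years in the future is scaled by exp(-λm × t). A two-line TypeScript sketch:

// Moral discount: with half-life H, an impact t years out is scaled by
// exp(-(ln 2 / H) × t), equivalently 0.5^(t / H).
const discount = (t: number, halfLifeYears: number): number =>
  Math.exp(-(Math.LN2 / halfLifeYears) * t);

// discount(30, 30) ≈ 0.5   (half value at one half-life)
// discount(60, 30) ≈ 0.25  (quarter value at two half-lives)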

Temporal Profiles

  • transition: a one-time or finite burst of impact around the change. Uses Teff from the time stance + physical time_type.
  • steady_case_flow: new cohorts each policy-year. Treated as per-year flow (Teff = 1).
  • steady_structural: ambient background per policy-year. Treated as per-year flow (Teff = 1).

Physical Time Shape (time_type)

time_type describes the physical persistence of each axiom_pair:

  • finite: provide duration_years. T_eff = (1 - exp(-λm × duration)) / λm.
  • indefinite: provide physical_half_life_years. For transition profiles, T_eff = 1 / (λm + λp); for steady profiles, flows remain per-year.
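
Putting the profile and time_type rules together, a hypothetical tEff helper might look like this. It is a sketch consistent with the formulas above, not the engine's actual code:

function tEff(
  moralHalfLifeYears: number,
  timeType: "finite" | "indefinite",
  durationYears?: number,
  physicalHalfLifeYears?: number
): number {
  const lm = Math.LN2 / moralHalfLifeYears; // moral decay rate λm
  if (timeType === "finite") {
    // Integral of exp(-λm × t) from 0 to duration
    return (1 - Math.exp(-lm * (durationYears ?? 0))) / lm;
  }
  // Indefinite: physical fade and moral discount compound
  const lp = physicalHalfLifeYears ? Math.LN2 / physicalHalfLifeYears : 0;
  return 1 / (lm + lp);
}

// Steady profiles skip this entirely: they use Teff = 1 (per-year flow).
// tEff(30, "indefinite", undefined, 20) ≈ 17.3 — matches the example below.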

Example

A factor with physical half-life 20y and moral half-life 30y, if modeled as a transition, has T_eff = 1 / (λm + λp) = 1 / (ln 2/30 + ln 2/20) ≈ 17 years; if marked steady_structural instead, it is reported as MU/year.

Social Distance Weighting

Most people naturally weight impacts on themselves and loved ones more heavily than impacts on strangers. Open Ethos makes this explicit through social distance weights.

Self (default: 1.0)

Impacts on you personally. Maximum weight by default.

Inner Circle (default: 0.8)

Close family and friends. People you interact with regularly and care about deeply.

Tribe (default: 0.5)

Extended community—colleagues, neighbors, members of groups you identify with.

Citizens (default: 0.3)

Strangers in your country or humanity generally. Distant but morally considerable.

Outsiders (default: 0.1)

Foreign nationals, distant populations, or those outside your community entirely.

Adjusting for Your Ethics

  • Strict impartiality: Set all weights to 1.0. Every person counts equally regardless of relationship to you.
  • Moderate partiality: Use the defaults. Most ethical frameworks acknowledge some special obligations to those close to us.
  • Strong partiality: Increase self/inner_circle, decrease citizens. Reflects a more agent-relative ethics.

The scale factor (S) is calculated as: S = count × social_weight. So 1000 citizens at 0.3 weight contributes 300 to the scale, while 1 self at 1.0 contributes 1.
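
In code, S is a weighted sum over the factor's scale groups. A sketch using the default weights (function and type names are hypothetical):

const DEFAULT_SOCIAL_WEIGHTS: Record<string, number> = {
  self: 1.0, inner_circle: 0.8, tribe: 0.5, citizens: 0.3, outsiders: 0.1,
};

interface ScaleGroup { social_class_id: string; count: number; }

function scaleFactor(
  groups: ScaleGroup[],
  weights: Record<string, number> = DEFAULT_SOCIAL_WEIGHTS
): number {
  // Sum of count × social_weight across all groups in the factor
  return groups.reduce(
    (s, g) => s + g.count * (weights[g.social_class_id] ?? 0), 0);
}

// scaleFactor([{ social_class_id: "citizens", count: 1000 }]) → 300
// scaleFactor([{ social_class_id: "self", count: 1 }]) → 1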

JSON Structure

When you generate a decision with AI or create one manually, it must follow this structure:

{
  "id": "unique-decision-id",
  "question": "The ethical question being analyzed",
  "context": "Background information and circumstances",
  "factors": [
    {
      "id": "factor-1",
      "name": "Short factor name",
      "description": "Detailed explanation of this factor",
      "what_changes": "What specifically changes if action is taken",
      "who_affected": "Who is impacted by this factor",
      "how_much": "Qualitative description of magnitude",
      "duration": "How long the effect lasts",
      "temporal_profile": "transition",
      "axiom_pairs": [
        {
          "axiom_id": "life_health",
          "intensity_per_year": 0.5,
          "time_type": "finite",
          "duration_years": 5,
          "physical_half_life_years": null,
          "confidence": 0.7,
          "polarity": -1.0,
          "rationale": "Why this axiom applies"
        }
      ],
      "scale_groups": [
        {
          "social_class_id": "citizens",
          "count": 10000,
          "description": "Population affected"
        }
      ]
    }
  ]
}

Key Fields Explained

  • factors: Array of distinct considerations. Include factors on BOTH sides of the decision (pro and con).
  • temporal_profile: "transition", "steady_case_flow", or "steady_structural". Transition uses Teff; steady profiles are per-year flows.
  • axiom_pairs: Each factor can affect multiple axioms. For instance, a factor might impact both health and autonomy.
  • time_type: Physical shape per axiom_pair: "finite" (with duration_years) or "indefinite" (with physical_half_life_years).
  • polarity: Negative = pushes toward NO; Positive = pushes toward YES.
  • scale_groups: Who is affected and how many. Use social_class_id (self, inner_circle, tribe, citizens, etc.).
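
If you validate or generate this JSON programmatically, a TypeScript shape like the following mirrors the example above. It is inferred from this guide, not an official schema:

type TemporalProfile = "transition" | "steady_case_flow" | "steady_structural";
type TimeType = "finite" | "indefinite";

interface AxiomPair {
  axiom_id: string;                        // e.g. "life_health"
  intensity_per_year: number;              // 0..1
  time_type: TimeType;
  duration_years: number | null;           // used when time_type is "finite"
  physical_half_life_years: number | null; // used when time_type is "indefinite"
  confidence: number;                      // 0..1
  polarity: number;                        // -1..+1
  rationale: string;
}

interface ScaleGroup {
  social_class_id: string; // self | inner_circle | tribe | citizens | outsiders
  count: number;
  description: string;
}

interface Factor {
  id: string;
  name: string;
  description: string;
  what_changes: string;
  who_affected: string;
  how_much: string;
  duration: string;
  temporal_profile: TemporalProfile;
  axiom_pairs: AxiomPair[];
  scale_groups: ScaleGroup[];
}

interface Decision {
  id: string;
  question: string;
  context: string;
  factors: Factor[];
}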

Common Mistakes

  • Forgetting to include factors on both sides of the decision
  • Setting confidence too high (0.9+) for uncertain outcomes
  • Using intensity 1.0 for non-death-equivalent impacts
  • Mixing up polarity (remember: negative = against action)

Calibrating Your Profile

The Calibration tab lets you customize the engine to reflect your personal values. Your settings are saved in browser cookies and persist across sessions.

Axiom Weights

Set a weight from 0 to 1 for each of the eight axioms. This represents how important each moral dimension is to you:

  • 0.0: This axiom doesn't factor into my moral reasoning at all
  • 0.25: Minor consideration
  • 0.5: Moderate importance (default)
  • 0.75: High priority
  • 1.0: Maximum importance—I weight this very heavily

Social Distance Weights

Adjust how much you weight impacts based on relationship proximity. See the Social Distance section for details on what each group means.

Moral Half-Life

Set the number of years after which an equal impact matters 50% as much. Lower values prioritize immediate effects; higher values give more weight to long-term consequences.
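
For reference, the defaults quoted throughout this guide can be collected into a single profile object. This is a sketch, and it assumes every axiom weight starts at the 0.5 "moderate" default mentioned above:

// Sketch of a default calibration profile. The 0.5 axiom weights assume the
// "moderate importance (default)" point on the weight scale; the social
// weights and 30-year half-life are the defaults quoted in this guide.
const DEFAULT_PROFILE = {
  moralHalfLifeYears: 30,
  axiomWeights: {
    life_health: 0.5,
    bodily_autonomy: 0.5,
    civil_liberty: 0.5,
    suffering_wellbeing: 0.5,
    fairness_equality: 0.5,
    truth_epistemic: 0.5,
    long_term_capacity: 0.5,
    social_trust: 0.5,
  },
  socialWeights: {
    self: 1.0,
    inner_circle: 0.8,
    tribe: 0.5,
    citizens: 0.3,
    outsiders: 0.1,
  },
};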

Calibration Tips

  • Start with defaults and adjust based on your intuitions
  • Try running the same decision with different calibrations to see how results change
  • There's no "correct" calibration—it should reflect YOUR values

Interpreting Results

The Overall Verdict

After scoring, you'll see one of three results:

  • YES (positive score): The weighted factors favor taking the action
  • NO (negative score): The weighted factors favor not taking the action
  • NEUTRAL (near-zero score): Factors roughly balance out

Strength Indicators

The strength tells you how contested the decision is:

  • Strong: strength ratio of 50% or more. Most factor weight points in the verdict's direction. Clear direction.
  • Medium: ratio of 20-50%. There are significant countervailing factors.
  • Weak: ratio under 20%. Highly contested, so proceed with caution.

Factor Breakdown

Click on each factor to see:

  • The factor's individual score and contribution to the total
  • Which axioms are involved and their parameters
  • Scale groups showing who is affected
  • Editable fields to adjust parameters and see real-time score updates

What the Score Means

The absolute magnitude of the score reflects the overall weight of moral considerations. A score of +0.5 is modest; a score of +50.0 indicates massive scale (many people, severe impacts, long duration). Don't compare scores across different decisions—they're not normalized.

Best Practices

When Generating Decisions

  1. Be specific about context. The more detail you give the AI, the better it can identify relevant factors.
  2. Always include factors on both sides. A one-sided analysis defeats the purpose. Ask "what are the strongest arguments FOR and AGAINST?"
  3. Ground intensities in anchors. Don't just guess—reference the intensity scales to maintain consistency.
  4. Be honest about uncertainty. Use appropriate confidence levels. Most predictions about the future should be 0.3-0.7, not 0.95.

When Reviewing Results

  1. Check the factor breakdown. Does each factor make sense? Are the parameters reasonable?
  2. Adjust suspicious values. If something seems off, edit it and see how the score changes.
  3. Consider what's missing. Are there important factors that weren't included? You can add them to the JSON and rescore.
  4. Don't take the result as gospel. This is a thinking aid, not an oracle. Use it to structure your reasoning, not replace it.

Calibration Philosophy

  1. Your weights should reflect your actual values, not what you think you "should" believe.
  2. Experiment with different calibrations to understand how sensitive the result is to your assumptions.
  3. Recalibrate periodically. Your values may evolve over time.

Worked Examples

Example 1: Lying to Protect Feelings

Question: Should I lie to a friend about their artwork to spare their feelings?

Factors that push NO (truth-telling):

  • Damages trust if discovered (social_trust, polarity -1)
  • Prevents artistic growth (long_term_capacity, polarity -1)
  • Violates epistemic integrity (truth_epistemic, polarity -1)

Factors that push YES (lying):

  • Prevents immediate emotional pain (suffering_wellbeing, polarity +1)
  • Preserves immediate relationship harmony (social_trust, polarity +1)

A typical result might be NO (weak) or NO (medium)—the truth-telling factors often outweigh, but it's contested because there are real benefits to the lie.

Example 2: Vaccine Mandates

Question: Should a company mandate vaccines for employees?

Factors that push YES (mandate):

  • Reduces disease spread (life_health, many citizens affected)
  • Protects vulnerable populations (life_health, fairness_equality)

Factors that push NO (no mandate):

  • Violates bodily autonomy (bodily_autonomy, employees affected)
  • Coercive pressure on employment (civil_liberty)
  • May erode trust in institutions (social_trust)

This often produces a contested result. The direction depends heavily on your calibration—how much you weight collective health vs. individual autonomy.

Frequently Asked Questions

Is my data sent to any server?

No. All scoring happens entirely in your browser. The only external call is to your AI assistant when generating decision JSON—and that's done by you manually copying/pasting.

Can I use this for important real-world decisions?

This is a thinking aid, not a decision-maker. It can help you structure your reasoning and identify considerations you might miss, but the final judgment should always be yours.

Why does the score seem very large/small?

Score magnitude depends on scale (number of people affected), duration, and intensity. A factor affecting millions of people will produce a much larger score than one affecting yourself. This is by design.

What if I disagree with the result?

Good! That means you should examine why. Either (a) some parameter is wrong and should be adjusted, (b) an important factor is missing, or (c) the engine has surfaced a genuine tension between your stated values and intuitions.

How do I reset my calibration to defaults?

Clear your browser cookies for this site. Your profile is stored in cookies and will reset to defaults when cleared.

Can I add my own axioms?

Not currently. The 8-axiom framework is fixed to ensure consistency. However, most moral considerations can be mapped to one or more of the existing axioms.