Principles
Core EA principles
One view of the Centre for Effective Altruism’s work is that our primary goal is to share certain core principles or tools, such as scope sensitivity and impartiality.
Why focus on principles?
We focus on promoting principles over conclusions because:
- We are more confident in the core principles (and some of the secondary principles and thinking tools) than most or all of the particular projects or activities that people related to effective altruism pursue.
- We think these principles serve as important common ground for people involved with effective altruism.
- We think that, as we learn more and the world changes, our best guess about which actions are most effective may change. By building a project and community focused on these principles, we will be better placed to update our views and find more effective ways to help others (compared to anchoring too much on our current best guesses).
What are these core principles?
Provisionally, we think that the most core, central principles of effective altruism are:
- Scope sensitivity: We’re committed to prioritizing actions that benefit many lives over actions that benefit only a few; saving a billion lives is more important than saving ten.
- Impartiality: With the resources we choose to devote to altruism, we strive to help those that need it the most without giving more weight to those who are similar to us or live near us in space or time. (Less confidently, this often means focusing on people in developing countries, animals, and future generations.)
- Scout mindset: We believe that we can better help others if we work together to think clearly and orient towards finding the truth, rather than trying to defend our own ideas. Humans naturally aren’t great at this (beyond wanting to defend our own ideas, we have a host of other biases), but because we want to understand the world as it really is, we aim to seek the truth and become clearer thinkers.
- Recognition of tradeoffs: Because we have limited time and money, we recognize the need to prioritize between different ways to improve the world.
There are other principles and tools that we are somewhat less confident in, but still seem likely to be true and important to how the effective altruism community operates:
- Expected value: We don’t intrinsically care about being certain we’re having some impact; saving 100 lives with 10% probability is better than saving 5 lives for sure, because in the first case we save 100 * 10% = 10 lives in expectation (a minimal numerical sketch of this comparison follows this list). (However, putting this principle into practice when exact probabilities and values aren’t handed to us is challenging, and different people can have very different practical strategies.)
- Thinking on the margin: If we're donating $1, we aim to give that extra $1 to the intervention that will make the best use of an extra $1—which might not be the most cost-effective intervention on average. This is important because most interventions have diminishing returns.
- Consequentialism: When thinking about how to help others, we are principally concerned with the outcomes of our interventions. (Though, importantly, we are not exclusively concerned with outcomes and want to avoid “ends justify the means” reasoning.)
- The importance (and difficulty) of considering unusual ideas: Society’s consensus has been wrong about many things over history (e.g. the sun circling the Earth, the morality of slavery). To avoid making similar mistakes, we strive to be open to unusual ideas and moral positions, while still thinking critically about the issues and acting cooperatively with others.
- The importance, neglectedness, tractability framework: When thinking about which causes to work on or donate to, we look for problems that affect a large number of beings (importance), with relatively few people working on them already (neglectedness), where we see tangible paths to making further progress (tractability). Often these pull in opposite directions: important problems are often already crowded, and highly neglected problems are often hard to get traction on, so we need to weigh the factors against one another (a toy scoring sketch follows this list).
- Crucial considerations: Sometimes missing one “crucial consideration” can cause an intervention that looked good to seem harmful, or vice versa. It is extremely hard to figure out whether an action is helpful or harmful overall, particularly if you’re trying to influence complex social systems or the long term. This is part of why it can make sense to do a lot of analysis of interventions you’re considering.
- Forecasting: Predicting the future is hard, but it can be worth doing in order to make our predictions more explicit and learn from our mistakes.
- Fermi estimates: When making a decision (e.g. about which organization to fund), we often find it useful to make a rough calculation of which option has the highest expected value. Even when there’s a lot of uncertainty, this can give a rough answer and can tell us which considerations are most important to investigate next (see the rough example after this list).
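To make the arithmetic in the expected value item above concrete, here is a minimal sketch in Python. The probability and payoff figures are the illustrative numbers from that bullet, not real estimates.

```python
# Illustrative expected-value comparison (numbers from the bullet above, not real data).
p_success = 0.10        # 10% chance the uncertain intervention succeeds
lives_if_success = 100  # lives saved if it does
certain_lives = 5       # lives saved by the safe alternative

expected_lives_uncertain = p_success * lives_if_success  # 0.10 * 100 = 10

print(f"Uncertain option: {expected_lives_uncertain:.0f} lives in expectation")
print(f"Certain option:   {certain_lives} lives")
# Under a pure expected-value rule, the uncertain option is preferred (10 > 5),
# even though there is a 90% chance it saves no one.
```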
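One rough way to illustrate how the three factors in the importance, neglectedness, tractability framework combine, purely as a sketch: score each factor for a few hypothetical causes and multiply the scores, so that weakness on any one dimension drags the overall case down. The causes and scores below are invented for illustration.

```python
# Toy importance / neglectedness / tractability scoring (all values invented).
causes = {
    # name: (importance, neglectedness, tractability), each scored 1-10
    "Hypothetical cause A": (9, 3, 4),
    "Hypothetical cause B": (6, 8, 5),
    "Hypothetical cause C": (4, 9, 8),
}

for name, (importance, neglectedness, tractability) in causes.items():
    # Multiplying captures the idea that a very low score on any factor
    # weakens the overall case for prioritizing that cause.
    score = importance * neglectedness * tractability
    print(f"{name}: {score}")
```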
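And here is a toy Fermi estimate in the same spirit: a rough cost-effectiveness comparison between two hypothetical interventions. Every input below is made up; a real estimate would need sourced figures and sensitivity checks, but even a crude calculation like this highlights which inputs matter most.

```python
# Toy Fermi estimate: rough cost per life saved for two hypothetical interventions.
def cost_per_life_saved(budget_usd, people_reached, risk_reduction_per_person):
    """Budget divided by expected lives saved (all inputs are rough guesses)."""
    expected_lives_saved = people_reached * risk_reduction_per_person
    return budget_usd / expected_lives_saved

# Hypothetical intervention A: broad reach, small per-person effect.
a = cost_per_life_saved(budget_usd=1_000_000,
                        people_reached=200_000,
                        risk_reduction_per_person=0.001)  # ~200 expected lives

# Hypothetical intervention B: narrow reach, larger per-person effect.
b = cost_per_life_saved(budget_usd=1_000_000,
                        people_reached=200,
                        risk_reduction_per_person=0.1)    # ~20 expected lives

print(f"A: ~${a:,.0f} per life saved")  # ~$5,000
print(f"B: ~${b:,.0f} per life saved")  # ~$50,000
# An order-of-magnitude gap like this suggests that the risk-reduction
# estimates are the considerations most worth investigating next.
```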
All of these lists are provisional and rely on some judgement calls.
Beyond this, there are substantial disagreements between people who are involved in effective altruism. These disagreements can be driven by different values and different methodological assumptions, and can be quite significant.