CEA’s Approach to Moderation and Content Curation

The Centre for Effective Altruism is often in the position of representing effective altruism. For instance, we run EffectiveAltruism.org, the EA Forum, and EA Global, and we produce the EA Handbook.

We think that we have a duty to be thoughtful about how we approach these platforms and materials.

This page shares a little about how we think about this role, and is aimed at engaged members of the effective altruism community who are interested in how we approach cause representation in our materials.

Summary

In general, we are trying to create spaces where people can think carefully, where people can support and encourage each other to take meaningful action on global problems, and where we are cooperative with each other and the world. We think that we can do this without taking an organizational stance on which cause or strategy is most effective. This guides our approach to moderating discussion spaces and deciding who to admit to our events.

When we decide on content splits, we can’t be quite so neutral, but we aim to do the following (roughly in priority order):

  1. Share the arguments for key principles and thinking tools that are core to effective altruism
  2. Give examples of work that people in effective altruism are pursuing
  3. Share the arguments (and counter-arguments) for some of those projects

We hope that this leaves people in a good position to make up their own minds.

When deciding on the specific ideas and examples for points 2 and 3, we try to pay attention to:

  • Accurately representing the views and actions of people heavily involved in effective altruism (including highly engaged members, cause prioritization experts, and people involved in founding effective altruism)
  • Giving enough space to different cause areas to fully explain the core arguments for them
  • Emphasizing that there is ongoing disagreement about which cause areas are most important (we discuss this more below).

Currently, this means that about one third of our introductory content is focused on key principles (like scope sensitivity). Of the remaining cause-area-specific materials, roughly 50% focuses on existential risk reduction (especially AI risk and pandemic risk), 20% on global development, 15% on animal welfare, and 15% on other causes (including broader longtermism). Content for more advanced audiences (like EA Global talks) tends to place slightly less weight on existential risk reduction. These percentages may change over time, as we learn more about how to communicate these ideas effectively, and as the EA community itself changes.

We run some programs and events that are not “effective altruism” branded: in these cases, we don’t aim to follow these guidelines, since we are not claiming to represent effective altruism.

Moderation and curation

In much of our work, we’re focused on creating spaces for people to discuss different strategies for improving the world. For instance, EA Global and the EA Forum are spaces to discuss these questions.

In these spaces, we aim to foster an environment where people are cooperative, thinking carefully, and focused on actually doing good stuff. We think that we can largely do this without making decisions ourselves about which strategies are most effective.

EA Forum

Our staff moderate the EA Forum alongside part-time and volunteer moderators.

We don’t filter or moderate EA Forum posts based on cause prioritization. You can read more about our Forum content policies, including what we encourage and discourage, in this guide to Forum norms. You can view a log of Forum moderator comments here.

EA Global and EAGx admissions

People apply to attend EA Global and EAGx events, and we decide who to admit. You can read more about our admissions process here. We discuss our approach to cause prioritization in these decisions towards the bottom of that page.

Content

Sometimes we have relatively limited space for content: for instance, a set number of speaking slots at EA Global, a fixed number of articles we can reasonably fit into our EA Handbook, or a choice about which content to feature on EffectiveAltruism.org.

In cases like this, we need to make a specific decision, and can’t spend equal time/space on all possible worldviews.

If we had in-house researchers, we might be able to rely on their decisions. But we don’t, so we want to think carefully about other ways of approaching these decisions that lead us to represent EA ideas clearly and fairly.

To that end, we’ve created a set of heuristics that we can apply. This section describes those heuristics and how we’re currently applying them.

General heuristics we use for curating content

When we curate content, we are trying to do the following things, in roughly this priority order:

  1. Share certain core principles or thinking tools (things like “we need to make tradeoffs because we have limited time/money”)
  2. Give concrete examples of the kind of work this might mean we should focus on
  3. Share the arguments for (and against) current key focus areas in effective altruism

Principles first

Our primary goal is to share certain core principles or tools: things like “making tradeoffs”, “seeking the truth”, and “scope sensitivity”.

Why focus on principles?

We focus on promoting principles over conclusions because:

  • We are more confident in the core principles (and some of the secondary principles and thinking tools too) than most or all of the particular projects or activities that people in effective altruism pursue.
  • We think that this is true for many other effective altruists too: these principles are important common ground.
  • We think that, as we learn more and the world changes, our best guess about which actions are most effective may change. We think that by building a project and community focused on these principles, we will be better placed to update and find more effective ways to help others (compared to if we anchored too much on our current best guesses).

What are these core principles?

We give a provisional list on this page.

Concrete examples

We don’t just want to talk about the principles: we also want to share examples of particular work that people have done motivated by effective altruism.

This is because:

  • As noted in our strategy, we want to make sure that the community is focused on action, rather than simply considering ideas in the abstract: action is necessary for us to have an impact.
  • We think that concrete examples can be a particularly clear and compelling way to explain what effective altruism is focused on, what it has achieved, and what it is trying to achieve.

Sharing arguments for key focus areas

We also want to share the arguments for some of the key things that people in effective altruism are working on, and to give examples of that work. We think this is important because it’s a lot of what the community is about, and it makes the content much more concrete (rather than being heavily philosophical, which we think would not give a good sense of what most people in effective altruism work on).

When talking about specific areas, our core goals are to share the arguments for some of the main areas, highlight that there are other areas one could work on, make clear that there is disagreement in the community about the right split between areas, and encourage people to make up their own minds.

How we approach particular types of content

Introductory materials

This section covers introductory materials, like “What is Effective Altruism?”, our EA Handbook, other key pages on effectivealtruism.org, and the advice we give to local effective altruism groups.

When deciding on the split of content for introductory materials, we balance a few different factors:

  1. Accurately explaining each area:
    • Most importantly, we want to give a high-quality explanation of each area, so that people are able to make informed decisions about their personal cause prioritization. (This pushes somewhat towards giving more space to harder-to-explain areas like AI relative to bednets.)
  2. Accurately representing the views of people who are heavily involved in EA:
    • It’s hard to define what “heavily involved in EA” means, but we place some weight on “EA founders”, highly engaged community members, and cause prioritization experts, and not too much weight on the full sample of people who filled out the EA survey.
    • Our current impression from rough research is that all of these groups would on average assign >50% of EA’s future resources to existential risk reduction, though of course there is much disagreement.
    • We think that there are drawbacks to each of these groups (e.g. “cause prioritization experts” may be selected for preferring esoteric conclusions and arguments, and highly engaged community members have been selected to agree with current EA ideas), but their views seem to converge to a significant degree.
  3. Not implying that people should end up believing one view:
    • We think that the main way we should achieve this is in how we introduce and frame different perspectives: we intend to make clear that there’s ongoing disagreement about which approach is right, and to encourage people to make up their own minds.
    • However, even with careful framing, if 80% of the content were focused on one area, people might think that they were meant to focus on that area. They might (reasonably) take this as a sign that it is the “preferred” position.
    • Since factors 1 and 2 above push towards a split that is relatively focused on existential risk reduction, the largest risk is that people think that they’re meant to focus on existential risk reduction. We try to mitigate this by shifting content marginally away from existential risk reduction (compared to what factors 1 and 2 alone would imply), by including content that encourages people to develop their own views, and by providing criticism of each cause area.

We try not to place too much weight on presenting the ideas that we think are most likely to appeal to newcomers. While we try to explain things in an intuitive way and give examples that relate to issues people are already familiar with, we think it’s important that we represent EA fairly from the start, rather than engaging in “bait and switch” tactics.

Similarly, we try not to place much weight on our staff’s views, or on any particular feedback we’re getting from sections of the community: we think that these views are likely to be less accurate and representative than the views of the groups described above.

Currently, in most of our introductory content (like our EA Handbook), about one third of the content is focused on key principles (like scope sensitivity). Of the cause-area-specific materials, roughly 50% focuses on existential risk reduction (especially AI risk and pandemic risk), 20% on global development, 15% on animal welfare, and 15% on other causes (including broader longtermism).
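
As a rough illustration (taking the percentages above at face value, and treating “about one third” as exactly one third), the implied shares of the total introductory content are approximately:

$$
\tfrac{2}{3}\times 50\% \approx 33\%\ \text{(existential risk reduction)}, \quad
\tfrac{2}{3}\times 20\% \approx 13\%\ \text{(global development)}, \quad
\tfrac{2}{3}\times 15\% = 10\%\ \text{(animal welfare; likewise other causes)}
$$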

People will disagree about both the correct split and the correct process for deciding on it, but ultimately we can only include so much content, and we have to make a call.

Conference talks, meetups, etc.

The main example here is talks at conferences, including EA Global and EAGx. Most attendees will already be familiar with core ideas, so this group is somewhat more advanced.

When deciding on our content split, we focus on accurately representing the views of people who have been heavily involved in EA (factor 2 above), and on making sure not to imply that people should hold a particular view (factor 3 above). This tends to mean that the content is somewhat more focused on global health and wellbeing than our introductory content is, since the need to fully explain longtermist ideas (which tend to be more complex) boosts longtermist content less here than it does in introductory materials.

Additionally, since people can choose not to attend much conference content, we tend to push more towards a diversity of content, so that most people can self-select into a stream of content that is interesting to them. This also pushes towards content being more focused on global health and wellbeing (including content on animal welfare) relative to introductory content.

For EA Global and EAGx conferences in 2022, 54% of content was on cross-cause issues (growing effective altruism, cause prioritization, policy, rationality, etc.). Of the cause-specific content, roughly 50% focused on existential risk reduction (especially AI risk and pandemic risk), 20% on animal welfare, 20% on global development, and 10% on other causes.

Non-EA-branded spaces

We sometimes hold events or programs that are focused on specific issues or cause areas.

In such cases, we still aim to encourage good epistemics, a focus on actions, and a cooperative attitude. But we don’t try to stick to the above approach to cause prioritization.

Conclusion

Our core focus is on creating a community where people can think clearly, focus on taking actions that improve the world, and act cooperatively. We think that such a community has the potential to radically improve the world.

We try to be reflective about what style and distribution of content can best further this goal, but we’ve made mistakes about this split in the past, and we don’t think that we’ve found the perfect balance yet. If you disagree with any of these heuristics, or think that we’ve failed to live up to this document, please let us know via our anonymous feedback form. We will reflect on your input, and may make changes and/or update our mistakes page as a result.