CEA's 2018 strategy

This page was last updated in May 2018, and does not represent our most recent thinking. For our current strategy, please see our strategy page.

In this article we discuss some of the shared assumptions that CEA makes as an organization to allow us to make plans and act together. This doesn't mean that all CEA staff necessarily believe all of these things.

Uncertainty and change

It is important to highlight that a key part of our thinking is great uncertainty: uncertainty about which moral theory is correct, about which problems we should work on, and about how to build a good community. What follows are our collective current best guesses. Although we will always know only a fraction of what we need to, we expect and hope that much of what follows will change as we learn more, and as circumstances change.

Vision

Our vision is an optimal world. This might seem nonspecific, but that's by design: we don't yet know what an optimal world will look like. We think there are many different factors involved in making the future as good as possible, many of which humanity probably doesn't yet understand. However, we think it's good to keep aiming to make the world the best that we can.

Therefore, our goal is to help put humanity in a good place to solve our collective problems. This means supporting careful thinking about our collective values, and about the social and technological means of implementing those values. It also means supporting the cooperation and stability that will give this thinking time to bear fruit. We intend to keep working towards things that seem robustly good for humanity while we continue to learn more.

How we can work towards our vision

This section explains how we think CEA can best attain its vision. This thinking is the reason for our mission, which is to create a global community of people who have made helping others a core part of their lives, and who use evidence and reason to figure out how to do so as effectively as possible.

Here are some of the assumptions that underlie our priorities and thinking:

The long-term value hypothesis

We believe that the most effective opportunities to do good are aimed at helping the long-term future.

We believe this because we think that there are potentially many more beings in the future than are currently alive, and that these future beings are morally valuable. This means that most moral value, and so most of the impact of our actions, lies in the future. We think that helping the long-term future is extremely difficult, but achievable.

However, we recognize that this argument rests on some significant moral and empirical assumptions, and so we remain uncertain about how valuable work on the long-term future is relative to work on other problems. We think that there are other important cause areas, particularly animal welfare and effective global health interventions.

For more explanation, see the cause profile on the long-term future, 80,000 Hours’ writeup, Nick Beckstead’s dissertation, and this podcast with Toby Ord.

Improving the world’s long-term trajectory

We believe that the best way to help the long-term future is to improve the world’s development trajectory, so that the future becomes permanently better than it otherwise would have been. This holds even if an improved trajectory comes at the cost of slower development: we would rather reach a significantly better state of the world slowly than a worse one quickly. This is because we believe that the value derived from an improved trajectory dwarfs the value derived from greater speed.

A particularly salient way of improving the world’s development trajectory is to reduce existential risks. Existential risks are made up of extinction risks and non-extinction risks, which Nick Bostrom defines as follows:

  1. Extinction risks: Risks which threaten to cause human extinction.
  2. Non-extinction risks: Risks which threaten to permanently and drastically lower the value of the future.

In line with this, CEA assigns special importance to reducing existential risk.[1] However, following Nick Beckstead, we also believe that there are other important ways to improve the world’s development trajectory. Chief among them is increasing the probability that the very best possible future comes to be (as opposed to a future which, while good, is not as good as it could be). The value of such a future could be very high, and we have a non-trivial chance of making it more likely.

Another important belief of ours is that interventions explicitly targeted at the long-term future, such as technical AI safety work, have a greater expected impact on the long-term future than interventions targeting the near term. While we think that there are positive long-term indirect effects from, e.g., interventions to reduce global poverty or promote animal welfare, we expect those effects to diminish over time, and hence not to have a large permanent effect on the world (see Paul Christiano).

More on the importance of existential risk can be found in Nick Bostrom’s papers Astronomical Waste and Existential Risk as a Global Priority, in Nick Beckstead’s dissertation, and in this LessWrong post by Beckstead (based on the dissertation). Paul Christiano discusses the long-term effects of global poverty interventions here.

The portfolio approach to existential risk reduction

We believe that to reduce existential risks, we need a portfolio of different approaches, which requires a community of people working on these issues.

Since there are many different approaches to improving the world’s long-term trajectory, and since work on each of these approaches is likely to face diminishing returns, it makes sense to split resources between different problems and approaches. We remain uncertain about the relative merits of these different approaches.

However, CEA believes that AI risk is probably the most significant of the known existential risks. We also believe that AI could be a solution to the other existential risks, and that AI, if harnessed well, could help us achieve a great future.

There is expert consensus that it is very uncertain when artificial general intelligence (AGI) will arrive (that is, how long the “timelines” are). Based on Owen Cotton-Barratt’s research, CEA believes that, given this uncertainty, it is prudent to devote resources both to projects which will have an impact under short-timeline scenarios and to projects whose impact is greater under longer timelines. Different kinds of resources will be useful in each scenario.

  1. If the timelines are very short (in particular if they are less than ten years), the most useful resources will be highly talented AI strategists, policy-makers, and technical safety researchers, as well as connections with key players.
  2. If timelines are longer, other kinds of resources, besides those immediately useful for mitigating AI risk, become increasingly useful. In particular, it becomes more important to build a community of talented and value-aligned people who are willing to flexibly shift priorities to the highest-value causes. In other words, growing and shaping the effective altruism community into its best possible version is especially useful under longer timelines.

Owen’s talk describes what actions to take under different timeline scenarios in much greater detail. The most widely read source on AI risk is Nick Bostrom’s Superintelligence. A TED Talk can be found here, and a write-up by Tim Urban can be found here. An article on AI as a positive and negative factor for existential risk by Eliezer Yudkowsky can be found here.

Community building

We believe that, of the things that CEA could do, we are best placed to support and develop the effective altruism community.

CEA does some work geared towards short timelines, including individual outreach to talented existential risk researchers. If we were to significantly update towards timelines shorter than 10 years (which we currently think are unlikely), we would shift somewhat more of our focus towards those activities.

However, even in that scenario, CEA would not shift to focus exclusively on short timelines. It seems to us that CEA’s comparative advantage is to work on scenarios where AI is developed more slowly, because if timelines are long, EA community building is more important. CEA is particularly well placed to develop the EA community, as one of the organizations that fostered it and the owner of several key EA brands (such as EA Global, EffectiveAltruism.org, and EA Funds).

We want to build a community that is not exclusively or perpetually focused on the problems that we currently think are most important. Partly this is because we are unsure about some of the above, and want to allow room in the community for people to seek projects that are even more important. It is also because much of our impact may come in the medium to long term, when we might have more information about what is important, or face a different set of global problems. So we want to build a community that is flexible: responsive to new information, problems, and circumstances.

Mission

The assumptions listed above justify our mission: to create a global community of people who have made helping others a core part of their lives, and who use evidence and reason to figure out how to do so as effectively as possible.

How can we complete our mission?

Community-building hypotheses

This section discusses some important background hypotheses about how to build a valuable community.

Money, Talent, Ideas, Coordination

We believe that it is useful to think of the impact of the community as a product of four complementary resources: money, talent, ideas, and coordination.

One way to justify this is to consider the key resources that an organization working on an important problem needs. Clearly, it needs staff (talent), and money to pay and equip its staff. Less obviously, the organization needs a set of ideas and knowledge about the problem it is working on. The organization also needs coordination: internally, to make sure people are working together efficiently, and externally, with donors, potential hires, and relevant information sources. Coordination is a multiplier on the other resources because it allows them to be used more efficiently.

These resources support each other. For instance, money is more valuable if there is a large pool of potential hires; ideas and information are more valuable if there are well-equipped teams ready to work on the basis of that information; and coordination is more important if there is a larger community of workers, donors, and researchers.
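A deliberately toy way to make the “product” above literal (our own illustrative sketch; the symbols and the strict multiplicative form are our assumptions, not a model CEA has published) is:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative sketch only: community impact as a product of the
% four resources. The symbols and the multiplicative form are
% assumptions made for illustration, not a published CEA model.
% M = money, T = talent, I = ideas, C = coordination.
\[
  \text{Impact} = M \times T \times I \times C
\]
\end{document}
```

On this toy picture, doubling any single resource at most doubles impact, while letting any factor fall toward zero collapses the whole product, which is one way of reading the claims that the resources are complementary and that coordination multiplies the value of the rest.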

Money allocation is a higher priority than increasing total funding

We believe that CEA can currently be most useful by allocating money to the right projects rather than by bringing in more donors.

The EA community has been fortunate enough to attract large amounts of funding as it has grown. This means that money is currently a less salient bottleneck on what we can accomplish. Ben Todd argued in November 2015 that EA is more talent constrained than funding constrained: that we have a greater shortage of very talented people than of money. This change occurred largely because of the Open Philanthropy Project, a foundation which is closely aligned with the effective altruism community and grants hundreds of millions of dollars per year.

We think that the community continues to benefit from some people focused on earning-to-give, and others working directly on important problems. 80,000 Hours’ career guide gives the best advice on this question for individuals. Roughly, we think that if an individual is a good fit to work on the most important problems, this should probably be their focus, even if they have a high earning potential.

If direct work is not a good fit, individuals can continue to have a significant impact through donations. We will always need money to achieve positive outcomes in the world, and so we continue to encourage donations as an important way for anyone to be involved, even though this isn't our current priority.

Talent is high variance

We believe that some people have the potential to be disproportionately impactful.

It appears as though some people may be many orders of magnitude more impactful than average just by virtue of the resources (money, skills, network) they have available. This is discussed in more detail in our post on a three-factor model of community building.

We can think of the amount of good someone can be expected to do as the product (in a mathematical sense) of three factors, as the sketch after this list illustrates:

  1. Resources: The extent of the resources (money, useful labor, etc.) they have to offer;
  2. Dedication: The proportion of these resources that are devoted to helping;
  3. Realization: How efficiently the resources devoted to helping are used.
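As a purely hypothetical illustration of this product (the numbers and the shorthand symbols R, D, and Z below are ours, not estimates CEA has made):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Three-factor model of expected impact, with made-up numbers.
% R = resources, D = dedication, Z = realization (our shorthand).
\[
  \text{Expected good done} = R \times D \times Z
\]
% Hypothetical example: resources worth R = 100 units, a tenth of
% them devoted to helping (D = 0.1), used at half their potential
% efficiency (Z = 0.5):
\[
  100 \times 0.1 \times 0.5 = 5 \text{ units of good}
\]
\end{document}
```

Because the factors multiply, doubling any one of them doubles the total, which is why even modest improvements in dedication or realization can matter a great deal.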

The differences between the very best and the average are large on all three criteria, and appear to be particularly large on the resource criterion. However, the other two criteria may be easier to influence, which is a reason to focus on them.

Although we believe that the above is true, we are aware of some tensions this implies for our community-building work. For instance, we want to focus our efforts on individuals we think are likely to have an outsized impact, but we also want to build a diverse and welcoming community.

For more discussion of how much individual expected impact varies, please see the second section of our three-factor model writeup.

Ideas are important, and difficult to spread reliably

We believe that for people to contribute to long-lasting positive effects, they need to understand a set of complex ideas. It is difficult to accurately spread complex ideas.

Because we believe that most of our impact is on the long-term future, and because efforts to help the long-term future are relatively underexplored, we need to build a community of people who can operate reliably in underexplored domains. They will need to do jobs for which there may be no established duties or ways of working. For instance, they may need to do research in fields without established research paradigms, such as AI strategy, or to establish processes for new organizations. To operate so autonomously, they will need a good understanding of the overall strategy for helping the future, as well as domain expertise in the problems they are working on.

Thus, effective altruism has a very complicated and difficult message. As Kerry Vaughan shows in the fidelity model of spreading ideas, this means that the effective altruism message often gets distorted in the mass media, in online discussions, and among people discussing EA.

Therefore, we need to be careful about what ideas we share, and we need to share them carefully. As Kerry points out, to properly convey the EA message, we may need to use long-form communication. In-person communication may also be uniquely useful for conveying ideas with high fidelity. Lastly, people who live in an EA hub like Berkeley naturally come into more contact with other knowledgeable community members, which tends to improve their understanding of key ideas.

Coordination increases impact

We believe that individuals will have a greater impact if they coordinate with the community, rather than acting alone.

CEA believes in what may be termed The Community Model: that a tightly connected group of people can have a much greater impact than the same number of people acting independently. This means that when people start projects, they should make sure that they are not interfering with other, more valuable projects. Similarly, they should not just think about how they can maximize their own impact, but also about how they can help others to have a greater impact. On this model, the effective altruism community ought to be a tightly knit network of people who trust and like each other, and who coordinate constantly. Of course, this doesn’t mean that they should always agree: they should discuss controversial issues collaboratively, and should work on a variety of approaches if they can’t agree on which is best.

The Community Model can be contrasted with what may be termed The Individualist Model (which seems to be the default model for many newcomers). On this model, individual effective altruists, or small groups, can (or perhaps even should) pursue EA projects effectively without coordinating with other effective altruists.

There are many reasons why CEA prefers The Community Model over The Individualist Model. Principally, we believe that our preference follows from standard economic arguments about gains from specialization and coordination. We have also observed that projects run by people who are deeply immersed in the community (e.g., at hubs such as Oxford or the Bay Area) are disproportionately valuable (although this may be due to certain biases we have). We have also noticed that failures to coordinate frequently lead to problems. Conversely, we have noticed that when the community comes together, for instance at conferences, productive discussions and connections result.

CEA should play a part in making sure that the EA community becomes more cohesive and densely networked. It should act as the central coordination point in EA, and make sure that projects which are naturally centralized happen.

Previously CEA followed a “startup incubator model” (helping incubate organizations such as Animal Charity Evaluators, The Life You Can Save, and 80,000 Hours) as a way to help the community coordinate. Over the last couple of years, we’ve turned our focus toward helping community members coordinate through in-person events, in local communities, and online.

Relevant readings on this topic include the last part of 80,000 Hours’ career guide, Ben Todd’s The Value of Coordination, Kerry Vaughan’s Improving the Effective Altruism Network, and Bostrom et al.’s The Unilateralist’s Curse (which advocates a “principle of conformity” for decisions, e.g., regarding sensitive information).

Coordination is aided by a strong culture

We believe that a high-impact community needs to have good norms.

Norms allow us to more efficiently cooperate with each other to reach good outcomes, and also affect people’s experience of the community.

These norms and virtues are of two kinds:

  1. Epistemic norms, regarding intellectual honesty, intellectual humility, curiosity, rigor, openness to criticism, openness to changing your mind, etc.
  2. Non-epistemic norms, regarding friendliness, modesty, collaborativeness, helpfulness, diligence, etc.

The positive effects of norms and virtues are often diffuse and hard to measure. By contrast, it may sometimes seem that breaking norms (e.g., bending the truth for the sake of persuasion) has clear positive effects. However, we believe that doing so has pernicious indirect effects, and that for this reason we should promote the above norms both within CEA and in the wider EA community. (See for instance the classic business saying that “culture eats strategy for breakfast”.)

More on the above argument can be found in Stefan Schubert’s, Ben Garfinkel’s, and Owen Cotton-Barratt’s Considering Considerateness: Why communities of do-gooders should be exceptionally considerate (see also the list of references). See also Emily Tench’s analysis of important norms, and CEA’s guiding principles.

Preserving value

We believe that the EA community already has the potential to produce a lot of value. Therefore, as well as increasing the potential of the community, it is also important to avoid risks to it.

We might define a risk as an event or process which

  1. Causes the EA community to cease to exist; or
  2. Permanently and drastically lowers the value of the EA community.

This definition is intended to include opportunity costs: scenarios where we could have reached an excellent outcome, but only reach a good outcome.

We currently believe that some of the most important risks are:

  1. Insufficient diversity and/or a hostile environment: The EA community is limited if it cannot attract people to help with our mission, regardless of their background. Failure to treat community members well increases the chance of losing members and of the community becoming more occupied with conflict.
  2. Reputation damage: Individuals or groups could carry out actions that harm the reputation of the community. Alternatively, distorted or over-simplified versions of some of our ideas could become associated with the community, limiting our ability to coordinate and engage with people who are new to the community.
  3. Dilution: An overly simplistic version of EA, or a less aligned group of individuals, could come to dominate the community, limiting our ability to focus on the most important problems.
  4. Poor online discussions: Poor discussions can mean that people are introduced to effective altruism in a way that puts them off, or which causes them to have false beliefs.
  5. Risky unilateral projects: As discussed under “Ideas are important”, lots of knowledge is needed to carry out projects in many of the areas that we care about. Less aligned individuals could cause direct harm via their projects, or, by occupying a particular space, could cause an opportunity cost by preventing others from working in the same area.
  6. EA fails to notice important opportunities: We remain uncertain about how to reach our goal. If we miss an important consideration, we might severely restrict the potential of our community.

For discussions about reputation risks, see Stefan Schubert’s, Ben Garfinkel’s, and Owen Cotton-Barratt’s article on considerateness (further readings in the references). For discussions about how to find new ways of doing good, see Will MacAskill and Kerry Vaughan.

CEA’s projects

All of the above helps to motivate the projects that we are currently working on.

CEA’s projects can be viewed as working on different parts of the four community resources (money, talent, ideas, and coordination):

  1. Money: Bringing more money into the community is not our current focus. However, through Giving What We Can and EA Funds, we encourage donors to think more about how they allocate their money.
  2. Talent: Our online content aims to help people learn more about effective altruism. Local groups and events then allow them to discuss the ideas in more detail. Finally, our Individual Outreach Team works with promising individuals to help them identify how they can best contribute.
  3. Ideas: Our Content Team aims to write up and share core ideas in effective altruism. Our events are also opportunities to share EA ideas.
  4. Coordination: Online content and events attempt to promote the norms and networks that aid coordination in the community.

Finally, and crucially, we need to ensure that we don’t lose the value of any of the community’s resources. This means that we also track and respond to risks to the community.

To summarize, our prioritized projects are:

  • Research and content: We share core ideas in effective altruism via EffectiveAltruism.org.

  • Local group support: We support in-person communities of individuals committed to learning more about effective altruism, and exploring how they can best use their careers to contribute.

  • Events

    • EAGx: We support local groups in organizing conferences aimed at introducing people to core ideas in effective altruism.
    • EA Global: We aim to help engaged community members build their networks and understanding.
  • Community health: We monitor and respond to risks to the community, and promote good norms.

These projects are aimed at different audiences. Roughly, the projects are ordered by the level of engagement of their intended audience. For instance, the content team is focused on people currently less engaged with the community (perhaps because they are newer), whilst the individual outreach team is focused on more engaged individuals.

In addition to our front-facing work, we have two support teams:

  • Operations: Provides vital logistical, financial, and office support.
  • Tech: Builds and maintains our online resources, and supports CEA with automation.

In addition, we have some projects that are a lower priority, because they are focused on generating funding:

  • EA Funds: An easy way for people to donate to effective charities; it also runs donor lotteries.
  • Giving What We Can: A community of individuals who have pledged to donate 10% of their income to effective charities.

Conclusion

We remain uncertain about many of these ideas, but we wanted to share our current thinking in the hope that it would be useful to others, and that others could help to improve our thinking.

If you have questions or comments on any of the above, please get in touch.

Additional sources on EA strategy

Nick Beckstead: EA Global SF 2017: EA Community Building

Nick Bostrom: Crucial considerations and wise philanthropy

Owen Cotton-Barratt: How valuable is movement growth?

Will MacAskill: EA Global SF 2017 Opening and Closing talks

Kerry Vaughan: What the EA Community can learn from the rise of the neoliberals

Kerry Vaughan: The fidelity model of spreading ideas

Robert Wiblin: EA Global 2016: Making sense of long-term indirect effects



Footnotes

  1. The notion of an existential catastrophe is often equated with human extinction. It is therefore important to notice that events which permanently and drastically lower the value of the future also count as existential catastrophes.