(Preamble)
We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
2. Long-Term Safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.
Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.
3. Technical Leadership
To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities — policy and safety advocacy alone would be insufficient.
4. Cooperative Orientation
Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.