2. Long-Term Safety

Publisher: OpenAI

We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next two years."