Major AI Organizations
Last updated October 2023
Predicting the First AGI Creator
- [30% likelihood] USA Government
- My personal view is that once weak AGI is developed, the US Government will mobilize quickly to leverage mankind’s last invention.
- As far as I know, there's no publicly available information on this scenario. This opinion is based solely on the belief that "most major USA decision makers (politicians, DoD generals, etc.) want the USA to stay the global superpower."
- This is further strengthened by the fact that the major AI governing institutions are under the Executive Branch.
- [25% likelihood] Google
- Subsidiaries include DeepMind & Google Brain, which merged into Google DeepMind in 2023. Anthropic is not a subsidiary, though Google is a major investor in it.
- [20% likelihood] China
- [15% likelihood] Microsoft
- Major investor in OpenAI
- [10% likelihood] Some other State or non-state actor
USA Government - Technology Emphasis
The U.S. governing institutions listed below (a non-exhaustive list) play varied roles in overseeing & advancing AI technologies.
- Intelligence Advanced Research Projects Activity (IARPA):
- IARPA, housed within the Office of the Director of National Intelligence, spearheads research to tackle challenges pertinent to the U.S. Intelligence Community. It addresses difficult issues that cut across intelligence agencies, though it has no operational mission and does not directly deploy technologies to the field. IARPA invests in high-risk, high-payoff research, often exploring multidisciplinary approaches to advance the understanding of cognition and computation.
- National Science and Technology Council (NSTC):
- The NSTC, through its Select Committee on AI (chartered by the White House in 2018), coordinates federal efforts relating to AI R&D. It acts as a cabinet-level council of advisers to the President on science and technology, helping to coordinate the S&T policy-making process.
- Chief Digital & Artificial Intelligence Office (CDAO):
- CDAO accelerates the Department of Defense’s adoption of data, analytics, and AI, reporting to the Deputy Secretary of Defense. It’s expected to release new guidance on the development and adoption of data analytics and AI to support mission-critical operations.
- Department of Defense (DoD) Responsible AI Working Council:
- Formed to ensure input and coordination across the Department for responsible AI, it developed a Responsible AI “Strategy and Implementation Pathway”.
- National Artificial Intelligence Initiative Office (NAIIO):
- Created by statute to coordinate and support the National AI Initiative, it's located within the White House Office of Science and Technology Policy. The office was established under the National Artificial Intelligence Initiative Act of 2020, which codified and expanded existing AI policies and initiatives.
- Office of Science and Technology Policy (OSTP):
- OSTP explores both the promise and pitfalls of AI, advising the President on matters related to science and technology. It also unveiled a “Blueprint for an AI Bill of Rights” to guide the design, use, and deployment of AI systems.
Companies / Nonprofits
- Alignment Research Center
- Anthropic
- A public benefit corporation founded in 2021 by a group of former OpenAI researchers, described as “an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.”
- A broad range of interests at this point - watch this space!
- AI Impacts
- AI Objectives Institute
- Center for Security and Emerging Technology (CSET)
- Center for AI Safety (CAIS)
- Center on Long-Term Risk (CLR)
- Centre for the Governance of AI (GovAI)
- A new spin-out of the Future of Humanity Institute (FHI, see below). “We are building a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI”
- Cooperative AI Foundation
- DeepMind
- A deep learning company aiming to "build safe AI systems, … [solve] intelligence, advance science and benefit humanity". Founded in 2010 and acquired by Google in 2014 (now part of Alphabet Inc., Google's parent company).
- They have demonstrated the power of deep reinforcement learning to beat human experts in games (AlphaGo, AlphaZero, etc.), and are starting to make inroads into scientific discovery and medicine with e.g. AlphaFold (which tangibly accelerated the field of protein folding).
- They are largely a technical company with a safety & ethics branch, and their policy research team includes Allan Dafoe, leader of GovAI (see above).
- Fathom Radiant
- Public benefit company building hardware to enable beneficial machine intelligence.
- Fund for Alignment Research (FAR)
- Future of Life Institute
- Machine Intelligence Research Institute (MIRI)
- A non-profit institute doing foundational mathematical research, focused on making superintelligence go well. Leaders in the field of safety on topics such as learned optimisation (inner misalignment) and embedded agency.
- Nonlinear
- Meta/grant making organisation. “We systematically search for high impact strategies to reduce existential and suffering risks… [and] make it happen. An incubator for interventions.”
- Nonlinear Network is their funding arm.
- OpenAI
- OpenAI was founded “to ensure that artificial general intelligence benefits all of humanity”. They openly commit to building safe AGI and have created some of the most impressive examples of AI systems today, such as the large language model GPT-3 (now open access) and CLIP.
- Largely a technical company, they have a dedicated “alignment” team and a governance branch called the “AI Futures” team. Originally not-for-profit, it now has a for-profit arm.
- Ought
- Working on advanced reasoning in AI systems. It was founded by safety-aligned people.
- Partnership on AI
- Redwood Research
- Founded in 2021, its mission is "to align superhuman AI", which it openly believes is more likely than not to be developed this century. They are currently working on getting language models to internalise human values, and doing some field-building work such as the ML for Alignment Bootcamp.
Academic institutions
- Center for Human-Compatible AI (CHAI)
- A research institute in Berkeley, California. Led by Stuart Russell, a long-time advocate for taking the problem of control in AI seriously and author of a best-selling AI textbook. They led the charge on inverse reinforcement learning, and are interested in a wide range of control-focused projects.
- Centre for the Study of Existential Risk (CSER)
- A research institute in Cambridge, UK, “dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse”. They are largely governance-focused and consider the complex interactions of global risks.
- Future of Humanity Institute (FHI)
- A research institute in Oxford, UK. They collaborate with Oxford DPhil students - Oxford’s name for PhD students - and house academics. Their interests are broad, spanning governance and alignment, and “include idealized reasoning, the incentives of AI systems, and the limitations of value learning.”
- David Krueger’s Research Group (University of Cambridge)
- Set up in 2021. Krueger is interested in many alignment-related topics, and is an accomplished ML researcher.
- Dylan Hadfield-Menell
Major funding bodies
- Effective Altruism Funds (EA Funds)
- Future Funding List: Updated table with filters for “Funding for long-term-oriented people and projects”
- Founders Pledge
- Longview Philanthropy
- Open Philanthropy (Open Phil)
- Survival and Flourishing Funds (SFP and SFF)
Other organizations (mix of safety & capabilities)
- Aligned AI
- ALTER
- Arkose
- AI Infrastructure Alliance
- ARENA
- Apollo Research (Marius Hobbhahn)
- AI Incidents
- AI Safety Support (AISS)
- Runs AI Safety Camp and provides career advice & community building.
- Allen Institute for AI
- Aleph Alpha
- Berkeley Existential Risk Initiative (BERI)
- Cohere
- Conjecture
- Convergence
- EleutherAI
- Pursues massive decentralised research projects, mostly replicating state-of-the-art large ML systems (e.g. their GPT-J). Their mission is to make the benefits of advanced AI open source and put them in individuals' hands. They have access to compute from CoreWeave.
- Encultured AI
- Forefront
- Global Catastrophic Risk Institute (GCRI)
- Global Priorities Institute
- Google Brain
- Hugging Face
- Institute for Artificial Intelligence and Fundamental Interactions (IAIFI)
- Lambda Labs for GPU cloud compute.
- Leverhulme Centre for the Future of Intelligence (CFI)
- Median Group
- Obelisk
- Preamble
- Render
- RunPod for decentralized compute.
- Stanford Existential Risk Institute
- Stanford Human-Centered Artificial Intelligence (HAI)
- Vast for decentralized compute.
- Weights & Biases
Misc. Resources
The above list is mainly from BlueDot's AI Safety Resources page, with added commentary by Jamie Bernardi. The lists are in alphabetical order by each organization's full name (not using acronyms for ordering), and I've added several other organizations myself. I'll likely re-organize these at a later time.