Affiliate Disclosure: DirJournal is a directory of information. Some links are affiliate partners; we may receive commissions for referrals. We do not verify or endorse third-party business claims. Learn more

    AI Safety Research Labs

    Expert Guide: AI Safety Research Labs

    AI Safety Research Labs providers listed on DirJournal have been independently verified through our 5-Point Human Audit, a rigorous editorial process maintained since 2007. This directory serves as a definitive reference for comparing qualified AI safety research labs by location, service scope, and verified credentials.

    Unlike automated aggregators, every listing below is manually reviewed for professional legitimacy, contact accuracy, and service quality. Our 19-year editorial legacy across 600+ industries ensures you're consulting a trusted, high-authority source.

    Verified Listings

    7

    Referring Domains

    55,000+

    Audit Status

    Human-Verified

    All Listings(7)

    A
    Berkeley, United States

    The Alignment Research Center (ARC) is a non-profit AI alignment research organization founded by former OpenAI alignment lead Paul Christiano and headquartered in Berkeley, California. ARC conducts theoretical and applied research on AI alignment with the mission of ensuring that future advanced AI systems are aligned with human interests. The organization originally housed the influential ARC Evaluations team (which spun off as METR) and continues research on heuristic arguments, mechanistic anomaly detection, eliciting latent knowledge, and theoretical alignment foundations. Paul Christiano's research at ARC, including the original ARC Theory work, has been highly influential in shaping modern alignment research methodology and theoretical foundations.

    Listed since Apr 2026·Verified 8 days ago
    A
    London, United Kingdom

    Apollo Research is an AI safety research organization specializing in evaluations and interpretability research for frontier AI models, founded and headquartered in London, United Kingdom. Founded by Marius Hobbhahn and Jeremie Scheurer, Apollo Research focuses on understanding deceptive alignment, situational awareness, scheming behaviors, and dangerous capabilities in advanced AI systems. The organization conducts pre-deployment evaluations for major AI labs including OpenAI and Anthropic, publishes influential research on AI deception and evaluation methodology, advises governments on AI policy including the UK AI Safety Institute, and contributes evaluations of frontier model behavior used in major AI safety announcements globally.

    Listed since Apr 2026·Verified 8 days ago
    C
    London, United Kingdom

    Conjecture is an applied AI safety research organization focused on solving the technical alignment problem and reducing existential risks from advanced AI systems, founded and headquartered in London, United Kingdom. Founded by Connor Leahy, Sid Black, and Gabriel Alfour, Conjecture conducts research on cognitive emulation (CoEm), interpretability, AI policy, and produces the COASCAS framework for principled AI alignment. The organization develops technical safety research, publishes influential papers on AI risk and alignment methodology, runs the Conjecture podcast, advises European governments and international AI policy bodies, and operates as one of the leading independent voices arguing for caution in frontier AI development globally.

    Listed since Apr 2026·Verified 8 days ago
    F
    Berkeley, United States

    FAR AI (Fund for Alignment Research) is a non-profit AI safety research organization, founded and headquartered in Berkeley, California. Founded by Adam Gleave, FAR AI conducts technical AI safety research, fiscal sponsorship of AI safety projects, AI safety field building, and operates the FAR AI labs research program studying neural network robustness, adversarial attacks on AI systems, scalable oversight, and emerging risks from frontier AI models. The organization publishes peer-reviewed safety research, hosts AI safety workshops including the influential AI Risk Forum, supports independent researchers through grants, and contributes empirical safety research to the broader AI safety community building knowledge essential for safer AI development globally.

    Listed since Apr 2026·Verified 8 days ago
    M
    Berkeley, United States

    The Machine Intelligence Research Institute (MIRI) is one of the world's longest-established AI safety research organizations focused on existential risks from advanced AI, founded in 2000 and headquartered in Berkeley, California. Founded by Eliezer Yudkowsky as the Singularity Institute and later renamed MIRI, the organization conducts foundational research on the alignment problem, AI risk theory, and existential risk from artificial general intelligence. MIRI publishes influential books and papers, including Eliezer Yudkowsky's foundational AI safety writings and Nate Soares's research, and runs the MIRI Communications program advocating for international policy responses to existential AI risks. The organization is widely credited with founding the modern AI safety field.

    Listed since Apr 2026·Verified 8 days ago
    M
    Berkeley, United States

    METR (Model Evaluation and Threat Research) is a leading AI safety research organization specializing in dangerous-capabilities evaluations of frontier AI models, founded and headquartered in Berkeley, California. Originally launched as the Alignment Research Center Evaluations team led by Beth Barnes, METR partners with major AI labs including OpenAI, Anthropic, and Google DeepMind to conduct pre-deployment evaluations measuring autonomous task completion, agent capabilities, biorisk, cyberrisk, and AI research and development capabilities. METR publishes the influential Time Horizon benchmark tracking the length of tasks AI agents can reliably complete autonomously, providing critical empirical data that informs AI policy and frontier model deployment decisions globally.

    Listed since Apr 2026·Verified 8 days ago
    R
    Berkeley, United States

    Redwood Research is an AI safety research organization focused on technical alignment research and AI control, founded and headquartered in Berkeley, California. Founded by Nate Thomas, Buck Shlegeris, and Bill Zito, Redwood Research conducts empirical alignment research on AI control techniques designed to safely deploy AI systems even when alignment is uncertain, along with mechanistic interpretability research, model organisms research, and AI safety evaluations. The organization publishes influential research on AI control protocols, scalable oversight, and adversarial robustness; partners with major AI labs and the broader AI safety community; and contributes to setting research agendas for ensuring frontier AI systems remain controllable and safe throughout their deployment lifecycle.

    Listed since Apr 2026·Verified 8 days ago

    Directory Insights

    Expert answers curated by DirJournal's editorial team — updated for 2026.

    Operate in the AI Safety Research Labs space?

    Join 30,000+ businesses on a 19-year-old authority platform. One payment. Lifetime SEO equity.

    Secure Your $249.95 Permanent Listing
