
Analysis

Website

Center for AI Safety (CAIS)

Published on

2026-03-24

For

Center for AI Safety (CAIS)

Score

19

Nonprofit AI safety research and field-building organization, founded in 2022 by Dan Hendrycks (Director, creator of the MMLU benchmark, and one of the world's most cited AI safety researchers) and Oliver Zhang. Mission: reduce societal-scale risks from AI through research, field-building, and advocacy. Key outputs: (1) the Global Statement on AI Risk (May 2023), a 22-word statement signed by 600+ researchers and public figures including Sam Altman, Bill Gates, and Elon Musk; (2) the paper 'An Overview of Catastrophic AI Risks'; (3) the Mutual Assured AI Malfunction (MAIM) paper, co-authored with Eric Schmidt and Alexandr Wang; (4) the AI Safety Newsletter (ML Safety Newsletter); (5) the AI Frontiers program (August 2025) for policymakers and business leaders; (6) the free online course 'AI Safety, Ethics, and Society'; (7) a compute cluster supporting ~20 research labs. The CAIS Action Fund serves as the advocacy arm. Staff includes researchers, field-builders, and advisors.

Market

AI Safety / AI Risk Research / AI Policy / Nonprofit Research / Technical AI Safety

Audience

AI safety researchers seeking compute resources and research funding; policymakers and government officials needing AI risk frameworks; students and professionals seeking AI safety education; corporate decision-makers evaluating AI governance; journalists and academics covering AI risk

Findings index (category and score)

Content (5), Content (8), Content (10), Content (13), Content (16), Content (20), Strategy (24), SEO (27), Content (30), Freshness (33)

Content

Global Statement on AI Risk — Signed by 600+ Researchers + Sam Altman + Bill Gates + Elon Musk — Not in Hero

Score

5

Severity

High

Finding

Wikipedia and CAIS About confirm: 'Global Statement on AI Risk signed by 600 leading AI researchers and public figures including Sam Altman, Bill Gates, Elon Musk.' The 22-word statement became the most-cited AI safety document in history — and it was organized and published by CAIS.

Recommendation

Feature the statement: '600+ leading AI researchers and public figures — including Sam Altman, Bill Gates, and Elon Musk — signed CAIS's statement on AI risk. The most significant public consensus document on AI safety ever produced. [Read the statement →]'

Content

AI Frontiers Launch (August 2025) — New Program — Not in Hero

Score

8

Severity

High

Finding

The ODW case study confirms: 'August 2025: We launched AI Frontiers.' AI Frontiers is CAIS's new program for engaging policymakers, business leaders, and the public on AI governance — the most recent strategic initiative.

Recommendation

Feature AI Frontiers: 'Introducing AI Frontiers (August 2025): CAIS's program to bring AI safety research into policy and business decisions. From Capitol Hill to corporate boardrooms, AI Frontiers equips decision-makers with the frameworks they need. [Explore AI Frontiers →]'

Content

20 Research Labs Supported by CAIS Compute Cluster — Research Infrastructure Scale Not in Hero

Score

10

Severity

Medium

Finding

The CAIS About page confirms: 'Oversees the compute cluster, which supports c. 20 research labs.' Providing compute infrastructure for 20 external research labs means CAIS is not just a research organization — it's the infrastructure backbone of the AI safety research ecosystem.

Recommendation

Feature the research infrastructure: 'CAIS provides compute infrastructure for 20+ AI safety research labs worldwide. By removing the compute barrier to AI safety research, we accelerate the entire field — not just our own work. [Research infrastructure →]'

Content

AI Safety, Ethics, and Society Course — Free Online Education — Field Building Not in Hero

Score

13

Severity

Medium

Finding

LinkedIn confirms the course: 'AI Safety, Ethics, and Society course — free, fully online, ~5 hours/week, no prior AI/ML experience required, certificate of completion.' Free education at scale for non-technical audiences is CAIS's most accessible field-building tool.

Recommendation

Feature the course: 'AI Safety, Ethics, and Society — free online course. No AI/ML experience required. 8 weeks of interactive discussion + readings. Certificate of completion. Open to everyone who cares about how AI shapes the world. [Apply for the course →]'

Content

Dan Hendrycks (Founder) — MMLU + Safety Research Pioneer — Founder Credibility Not in Hero

Score

16

Severity

Medium

Finding

CAIS was founded by Dan Hendrycks — the researcher behind MMLU (Massive Multitask Language Understanding), one of the most widely used AI benchmarks in the world. Hendrycks is among the most cited AI safety researchers globally.

Recommendation

Feature the founder: 'Founded by Dan Hendrycks — creator of MMLU, one of the most widely used AI benchmarks in the world, and one of the most cited AI safety researchers globally. CAIS was built on the conviction that AI safety is the defining challenge of our generation. [About our team →]'

Content

Mutual Assured AI Malfunction (MAIM) Paper — National Security Strategy Not in Hero

Score

20

Severity

Medium

Finding

CAIS Research confirms a paper co-authored by Dan Hendrycks, Eric Schmidt, and Alexandr Wang: 'this paper analyzes the strategic challenges posed by superintelligence from a national security perspective, proposing a three-part strategy: deterrence through Mutual Assured AI Malfunction (MAIM), nonproliferation, and enhancing competitiveness.' A paper co-authored with Eric Schmidt (ex-Google CEO) and Alexandr Wang (Scale AI founder, Meta Chief AI Officer) is extraordinary institutional reach.

Recommendation

Feature the MAIM paper: 'Dan Hendrycks, Eric Schmidt, and Alexandr Wang: On the National Security Implications of Superintelligence. CAIS research shapes policy at the highest levels of government and industry. [Read the research →]'

Strategy

Nonprofit + Research + Field-Building + Policy — Four Functions — Unclear Primary Entry Point

Score

24

Severity

Low

Finding

CAIS does four things simultaneously: technical research, field-building (compute grants, fellowships), policy advocacy, and education. The homepage does not clearly route different visitors — researchers, policymakers, students, corporate donors — to their relevant entry point.

Recommendation

Add audience navigation: 'For researchers: Apply for compute resources or research funding → For policymakers: AI Frontiers briefings and policy frameworks → For students: Free AI safety courses → For organizations: Advisory and consulting programs. [Find your path →]'

SEO

'AI Safety Research Organization' / 'AI Existential Risk' / 'Center for AI Safety vs FLI' — Category Terms

Score

27

Severity

Low

Finding

CAIS's primary search terms: 'AI safety research nonprofit,' 'existential risk from AI organizations,' 'AI safety statement 2023 2024,' 'Center for AI Safety vs Future of Life Institute.' These come from researchers, journalists, policymakers, and students seeking to understand the AI safety landscape.

Recommendation

Create comparison content: 'CAIS vs. Other AI Safety Organizations: CAIS focuses on technical safety research and societal-scale risk reduction. We are not an AI policy think tank (that's CSIS), not an alignment lab within an AI company (that's Anthropic/DeepMind safety teams), and not a futurist organization (that's FLI). We are a neutral research nonprofit. [What makes CAIS different →]'

Content

AI Safety Newsletter — Regular Publication — Community Building Not in Hero

Score

30

Severity

Low

Finding

CAIS.ai confirms the AI Safety Newsletter as a regular publication. A newsletter with a large technical audience is a primary community-building asset that signals CAIS's reach beyond academic publications.

Recommendation

Feature the newsletter: 'The AI Safety Newsletter: Regular coverage of technical AI safety research, policy developments, and field-building news. Join thousands of researchers, policymakers, and technologists who follow CAIS's work. [Subscribe →]'

Freshness

AI Frontiers August 2025 — Most Recent Program — Not Featured

Score

33

Severity

Low

Finding

AI Frontiers (August 2025) is CAIS's most recent programmatic launch, signaling a strategic expansion from pure research into policy and business engagement.

Recommendation

Update the homepage with AI Frontiers: 'August 2025: CAIS launches AI Frontiers — bringing AI safety frameworks to policymakers, executives, and business leaders who shape how AI is deployed at scale. [About AI Frontiers →]'
