AI Safety Connect

A platform to systematically map AI safety research and narrow the gap between academia and the wider AI safety community.

  • SPAR · Supervised Program for Alignment Research
  • International team
  • Final report available
  • 5 co-researchers

Summary

What is AI Safety Connect?

The project was developed within the Supervised Program for Alignment Research (SPAR), an international program that connects emerging talent with established researchers in alignment. AI Safety Mexico participated as the Mexican host of the collaboration that produced this final report.

The core hypothesis of the project is that there is a structural coordination gap between those producing AI safety research and those consuming it to design policy, products, or training programs. A platform with a consistent taxonomy and semantic search lowers the cost of discovery and opens the door to articulating shared agendas.

Methodology

How it is built

The platform relies on three complementary components: structured taxonomy, semantic search, and scalable data pipelines.
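To make the semantic-search component concrete, here is a minimal sketch of ranking taxonomy-tagged publications against a free-text query. It uses a toy bag-of-words vector with cosine similarity; the report does not specify the embedding model, so the corpus, field names, and scoring here are illustrative assumptions, not the platform's actual implementation (a real deployment would use a trained sentence-embedding model).

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; stands in for a real
    # sentence-embedding model (an assumption, not from the report).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini-corpus: publications tagged with taxonomy areas.
corpus = [
    {"title": "Scalable oversight via debate", "area": "oversight"},
    {"title": "Evaluating deceptive alignment in language models", "area": "evaluations"},
    {"title": "Data pipelines for safety benchmarks", "area": "infrastructure"},
]

def search(query, corpus, top_k=2):
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d["title"])), reverse=True)
    return ranked[:top_k]

results = search("alignment evaluations for language models", corpus)
```

The same interface extends naturally: the structured taxonomy supplies the `area` tags used for filtering, while the data pipelines keep the corpus up to date.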

  • Objective: minimize the structural coordination gap between academic research and AI safety communities.
  • Methodology: an integrated platform that maps authors, publications, and thematic areas through a structured taxonomy, semantic search, and scalable data pipelines.
  • International team: Julius A. Odai, Ihor Kendiukhov, Tim Sankara, Jakub K. Nowak, Kailer Laino

Publication

Final report

The document synthesizes the findings, design decisions, and future work of the SPAR collaboration.


International team

Co-researchers

The SPAR collaboration brought together a geographically distributed team with experience in alignment, evaluations, and data infrastructure.

  • Julius A. Odai
  • Ihor Kendiukhov
  • Tim Sankara
  • Jakub K. Nowak
  • Kailer Laino

Acknowledgements

Programs and partners

AI Safety Connect exists thanks to SPAR (Supervised Program for Alignment Research), which brought together the international team and provided academic mentorship. We thank the mentors who guided the methodological design and reviewed the results, as well as the communities of practice that provided feedback during development.

AI Safety Mexico participated as the Mexican host, connecting the team with local partners and bringing a Mexican perspective to the design of the research mapping.

Read the full report

The final report documents findings, design decisions, and future work.

Open the PDF