
AI Safety Research Scientist in San Francisco, California at Partnership on AI

Job Function: Science
Partnership on AI
San Francisco, California, 94114, United States

Job Description

WHO WE ARE

The Partnership on AI (PAI) is a nonprofit community of academic, civil society, industry, and media organizations that come together to address the most important questions concerning the future of AI. Established in 2016 by leading AI researchers representing seven of the world's largest technology companies: Apple, Amazon, DeepMind, Google, Meta, IBM, and Microsoft, PAI has evolved into a diverse global community committed to ensuring AI technologies benefit humanity.

We work with our Partners on the voluntary adoption of best practices and with governments to advance policy innovation. Here's how we work:

- Convening diverse stakeholders across the world, including experts and communities most impacted by AI
- Creating influential, accessible resources and recommendations to shape responsible AI development
- Publishing progress reports that track improvements in Partner practices and policy developments

ROLE SUMMARY

PAI is seeking a Research Scientist to join our Programs and Research team. This role focuses on technical AI governance of advanced AI systems – conducting research to understand how different safety approaches work in practice and proposing new ways to implement them effectively. Our Research Scientists play a highly visible role, collaborating with Partners to shape how advanced AI systems are developed and deployed.

In this role, you'll lead technical research to assess and enhance the feasibility of governance options being explored in public policy discussions on AI/AGI safety. This includes evaluating the effectiveness of different interventions, developing options to enhance implementation, and proposing practical mechanisms for oversight and compliance. Working alongside the Head of AI Safety program, you'll lead research that informs how we govern increasingly capable AI systems.

Examples of past PAI work include:

- Creating scalable guidelines that tailor safety practices for general-purpose AI systems
- Developing evidence-based recommendations for synthetic media governance through case studies with industry partners

Future technical work will include:

- Developing industry guidelines and technical frameworks for monitoring AI agents, considering tradeoffs between transparency, privacy, implementation costs, and user trust
- Contributing to open problems in technical AI governance

You"ll collaborate with diverse stakeholders to advance consensus around governance approaches that work in practice, helping decision-makers in industry and government understand when and how to intervene in advanced AI development. The role can be performed remotely from anywhere in the US or Canada.

WHAT YOU'LL DO

Technical Governance Research

- Lead research that connects technical analysis with policy needs, identifying technical challenges underlying AI/AGI safety discussions
- Propose governance interventions that could span different layers, from model safety to supply chain considerations to broader societal resilience measures
- Use a multistakeholder organization's tools (rigorous analysis, public and private communications, working groups, and convenings) to gather insights from Partners on AI development processes and ensure research outputs are practical and impactful
- Author/co-author research papers, blogs, and op-eds with PAI staff and Partners, and share insights at leading AI conferences like NeurIPS, FAccT, and AIES

Project Management and Stakeholder Engagement

- Lead technical research workstreams with high autonomy, supporting the broader AI safety program strategy
- Build and maintain strong relationships across PAI's internal teams and Partner community to advance research objectives
- Represent PAI in key external forums, including technical working groups and research collaborations

Strategic Communication and Impact

- Translate complex technical findings into clear, actionable recommendations for AI safety institutes, policymakers, industry partners, and the public
- Support development of outreach strategies to increase adoption of PAI's AI safety recommendations

ABOUT YOU

- PhD or MA with three or more years of research or practical experience in a relevant field (e.g., computer science, machine learning, economics, science and technology studies, philosophy)
- Strong understanding of the technical AI landscape and governance challenges, including safety considerations for advanced AI systems
- Demonstrated ability to conduct rigorous technical governance research while considering broader policy and societal implications
- Excellent communication skills, with proven ability to translate complex technical concepts for different audiences
- Track record of building collaborative relationships and working effectively across diverse stakeholder groups
- Adaptable and comfortable working in a dynamic, mission-driven organization

THE FOLLOWING COULD BE AN ADVANTAGE

- Experience at frontier AI labs or tech companies (AI safety experience not required; we welcome those with ML, product, policy, or engineering backgrounds) or government agencies working on AI-related areas
- Subject matter expertise in relevant areas such as:
  - AI system Trust & Safety (e.g., developing monitoring systems, acceptable use policies, or safety metrics for large language models)
  - Privacy-preserving machine learning and differential privacy
  - Cybersecurity, particularly vulnerability assessment and incident reporting

QUALITIES THAT ARE IMPORTANT TO US

- Builds Trust: Transparent and authentic, conveying trust and communicating openly while involving key stakeholders in decision-making
- Visionary: Takes a long-term perspective, conveying belief in an outcome and displaying the confidence to reach goals
- Inspirational: Inspires and motivates others in a positive manner
- Courageous: Seeks out opportunities for continuous improvement, and fearless in intervening in challenging situations
- Decisive: Makes informed decisions in a timely fashion
- Personal Development: Seeks opportunities for individual personal development

ADDITIONAL INFORMATION

Research has shown that some potential applicants submit an application only when they feel they have met close to all of the qualifications for a role. We encourage you to take a leap of faith with us and submit your application as long as you are passionate about making a real impact on responsible AI. We are very interested in hearing from a diverse pool of candidates.

PAI offers a generous paid leave and benefits package, currently including:

- Twenty vacation days; three personal reflection days; sick leave and family leave above industry standards
- High-quality PPO and HMO health insurance plans, many 100% covered by PAI; dental and vision insurance 100% covered by PAI
- Up to a 7% 401K match, vested immediately
- Pre-tax commuter benefits (Clipper via TriNet)
- Automatic cell phone reimbursement ($75/month)
- Up to $1,000 in professional development funds annually
- $150 per month to access co-working space
- Regular team lunches and focused work days
- Opportunities to attend AI-related conferences and events and to collaborate with our roughly 100 partners across industry, academia, and civil society

Please refer to our careers page for an updated list of benefits.

Must be eligible to work in the United States or Canada; we are unable to sponsor visas at this time.

PAI is headquartered in San Francisco, with a global membership base and scope. This role is eligible for remote work within the United States and Canada, with no requirement to be located in San Francisco.

PAI is proud to be an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment in all aspects of employment, including recruiting, hiring, promoting, training, education assistance, social and recreational programs, compensation, benefits, transfers, discipline, and all privileges and conditions of employment. Employment decisions at PAI are based on business needs, job requirements, and individual qualifications.

PAI will consider for employment qualified applicants with criminal histories, in a manner consistent with the San Francisco Fair Chance Ordinance or similar laws.

The Partnership on AI may become subject to certain governmental record keeping and reporting requirements for the administration of civil rights laws and regulations. We also track diversity in our workforce for the purpose of improving over time. In order to comply with these goals, the Partnership on AI invites employees to voluntarily self-identify their gender and race/ethnicity. Submission of this information is voluntary and refusal to provide it will not jeopardize or adversely affect employment or any consideration you may receive for employment or advancement. The information obtained will be kept confidential.

HOW TO APPLY

- Resume and/or CV
- Cover letter reflecting on your motivation for the role and experiences illustrating fit
- 2-5 page writing sample (please append to the cover letter): this can be any writing for which you were primarily an author and does not have to be about a topic related to AI safety

Applications for this job posting will be accepted until 11:59pm PT January 20, 2025.
