AI Risk Management & Evaluation SME at Jobgether – United States
About This Position
This position is posted by Jobgether on behalf of a partner company. We are currently looking for an AI Risk Management & Evaluation SME in the United States.
This role sits at the intersection of artificial intelligence governance, risk management, and technical evaluation frameworks, with a strong focus on the NIST AI Risk Management Framework (AI RMF). You will contribute to building structured approaches that ensure AI systems are designed, evaluated, and operated responsibly across their lifecycle. The position involves developing governance artifacts, risk mapping methodologies, and evaluation planning resources that translate complex standards into practical implementation tools. You will support organizations in aligning AI systems with the “Govern,” “Map,” and “Manage” functions while contributing to testing, evaluation, verification, and validation (TEVV) efforts. Working in a highly collaborative and standards-driven environment, you will help ensure AI systems are transparent, accountable, and resilient. This is a remote role that may include occasional travel and offers the opportunity to shape responsible AI practices at scale.
In this role, you will support the development and operationalization of AI governance, risk, and evaluation frameworks aligned to the NIST AI RMF, ensuring structured implementation across technical and organizational environments.
- Draft AI governance artifacts aligned with the “Govern” function, including oversight models, decision pathways, and accountability structures for AI systems.
- Develop contextual risk mapping frameworks aligned with the “Map” function, identifying system context, risk factors, and evaluation readiness criteria.
- Create TEVV planning and synthesis materials that define evaluation objectives, scope, traceability, and alignment with AI RMF standards.
- Build integration artifacts connecting evaluation outcomes to operational risk management under the “Manage” function.
- Document feedback loops between evaluation results and risk mitigation actions to support continuous improvement.
- Support coordination of evaluation exercises, synthesis reporting, and governance integration documentation as required.
This position requires deep experience in AI governance, risk management frameworks, and evaluation methodologies, particularly in regulated or standards-driven environments.
- 8+ years of relevant professional experience in AI evaluation, assurance, risk management, or governance framework development.
- 5+ years of hands-on experience supporting or operationalizing the NIST AI RMF or comparable AI governance frameworks.
- Strong understanding of AI system evaluation approaches, measurement strategies, and TEVV principles.
- Experience developing structured governance documentation, risk models, and technical policy artifacts.
- Familiarity with AI system lifecycle concepts, including deployment, monitoring, and continuous risk assessment.
- Strong analytical, documentation, and communication skills with the ability to translate standards into practical implementation guidance.
- Ability to work independently in complex, standards-oriented, and cross-functional environments.
Benefits
- Health, dental, and vision insurance coverage
- Short-term and long-term disability insurance
- Life insurance coverage
- 401(k) retirement plan
- Paid time off and holiday pay
- Opportunities for professional development and career growth
- Recognition and employee rewards programs
- Flexible remote work environment
- Additional benefits may be available depending on role and eligibility