Research Compute Platform Engineer at Lasso Informatics Inc
About This Position
Research Compute Platform Engineer
Development / Engineering
Location: Remote, North America
About Lasso
Lasso Informatics is a SaaS start-up with a live research data management and analysis platform that brings together multi-modal (imaging, genetics, behavioral, and biosample) data for large-scale studies. Thousands of researchers across the globe rely on our platform today, and we’re rapidly iterating and improving to push the boundaries of what’s possible in research data management.
We live to innovate and to empower scientists to focus on the science, not the technology, shortening the time to discovery and, ultimately, to cures.
Our team is incredibly diverse in both background and expertise, and that is not by accident: we believe the most creative and powerful solutions come from different ways of thinking about the world. You will work in an inspiring ecosystem alongside world-renowned professionals in medicine, physics, engineering, imaging, epidemiology, software development, and genetics. We thrive on empowering our colleagues to be thought leaders and to develop fresh solutions for an exciting and rapidly changing field.
About the Role
We are looking for a Research Compute Platform Engineer to design and build the next generation of Secure Compute Environments (SAFE) across AWS and GCP. This role focuses on developing new platform capabilities, establishing scalable and secure patterns, and enabling smooth handoff to SysOps for production rollout and ongoing operations.
This is a platform design and implementation role: you will define how systems should work, ensure they scale across multiple environments, and package them for reliable operation by others.
Responsibilities
Platform Design & Implementation
Design and implement new platform capabilities for secure research environments (compute, storage, access, tooling)
Build reusable reference architectures and standardized patterns for SAFE deployments
Develop infrastructure-as-code (e.g., Terraform) to enable consistent and repeatable environments
Build and support platforms for RStudio, Jupyter, and MATLAB in secure, multi-user environments
Enable reproducible workflows using tools such as DataLad and Apptainer (Singularity)
Support machine learning and data-intensive workloads, including GPU-enabled environments
Design multi-tenant, multi-environment systems with clear isolation boundaries
Define cloud resource organization strategies (AWS accounts, GCP projects/folders)
Ensure systems scale across teams, projects, and data sensitivity levels
Optimize compute environments for CPU, GPU, memory, and disk I/O performance
Design efficient storage and data access patterns (object storage, buckets, high-throughput file systems)
Identify and resolve bottlenecks across compute, storage, and networking layers
Define and implement tagging/labeling strategies for cost attribution and governance
Establish billing visibility and usage tracking across environments
Implement guardrails for budget control, quotas, and cost optimization
Translate security and compliance requirements into enforceable infrastructure patterns
Implement access controls, audit logging, and data governance mechanisms
Ensure environments meet regulatory and organizational requirements
Produce clear documentation, runbooks, and implementation guides
Ensure solutions are operationally sound, automatable, and maintainable
Partner closely with SysOps to support rollout and ongoing operations
Iterate on designs based on operational feedback
Qualifications
Strong experience with AWS and/or GCP (compute, storage, networking)
Experience designing scalable, multi-environment or multi-tenant systems
Hands-on experience with Linux system administration
Experience with Infrastructure as Code (Terraform, Pulumi, or similar)
Familiarity with:
RStudio, Jupyter, and/or MATLAB in shared environments
Containers (Apptainer/Singularity, Docker)
Data versioning or reproducibility tools (e.g., DataLad)
Understanding of:
Disk I/O and storage performance
Object storage (S3/GCS) and bucket design
GPU selection and workload optimization
Experience supporting machine learning or data-intensive workloads
Ability to design systems that balance security, usability, performance, and cost
Nice to Have
Experience with HPC environments or research clusters
Familiarity with schedulers (Slurm, Kubernetes)
Experience with secure data enclaves or clean room environments
Knowledge of compliance frameworks (HIPAA, SOC2, ISO 27001)
Experience with policy-as-code or security automation
About You
Strong systems thinker with attention to scalability and standardization
Able to move from prototype to production-ready design
Designs with operational handoff in mind
Comfortable working across infrastructure, security, and research domains
Clear communicator who can document and transfer knowledge effectively
Job Location
This job is located in the United States region.