Remote Senior AI Infra Engineer, AI/ML and Data Infrastructure
The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central team provides the support needed to push this work forward.

The Central team at CZI consists of our Finance, People & DEI, Real Estate, Events, Workplace, Facilities, Security, Brand & Communications, Business Systems, Central Operations, Strategic Initiatives, and Ventures teams. These teams provide strategic support and operational excellence across the board at CZI.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways to help drive solutions. We are uniquely positioned to design, build, and scale software systems to help educators, scientists, and policy experts better address the myriad challenges they face. Our technology team is already helping bring personalized learning tools to teachers and schools across the country. We are also supporting scientists around the world as they develop a comprehensive reference atlas of all cells in the human body, and we are developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to solve important problems in the biomedical sciences.

The AI/ML and Data Engineering Infrastructure organization builds shared tools and platforms used across the Chan Zuckerberg Initiative, partnering with and supporting a wide range of Research Scientists, Data Scientists, AI Research Scientists, and Engineers focused on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale. A person in this role will build these technology solutions and help cultivate a culture of shared best practices and knowledge around core engineering.

What You'll Do

* Participate in the technical design and building of efficient, stable, performant, scalable, and secure AI/ML and data infrastructure engineering solutions.
* Write hands-on code for our deep learning and machine learning models.
* Design and implement complex systems that integrate with our large-scale AI/ML GPU compute infrastructure and platform, making it easier and more convenient for our Research Engineers, ML Engineers, and Data Scientists to work across multiple clouds.
* Apply your experience building containerized applications and infrastructure with Kubernetes in support of our large-scale GPU research cluster and our various heterogeneous, distributed AI/ML environments.
* Collaborate with other team members on the design and build of our cloud-based AI/ML platform solutions, which include Databricks Spark and Weaviate vector databases, and support our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes (a minimal sketch of such a training workload appears at the end of this posting).
* Collaborate with our partners on data management solutions across our heterogeneous collection of complex datasets.
* Help build tooling that makes optimal use of our shared infrastructure, empowering our AI/ML efforts with a world-class GPU compute cluster and other compute environments such as our AWS-based services.

What You'll Bring

* BS or MS degree in Computer Science or a related technical discipline, or equivalent experience
* 5+ years of relevant coding experience
* 3+ years of systems architecture and design experience, with a broad range of experience across data, AI/ML, core infrastructure, and security engineering
* Experience scaling containerized applications on Kubernetes or Mesos, including creating custom containers using secure AMIs and continuous deployment systems that integrate with Kubernetes or Mesos (Kubernetes preferred)
* Proficiency with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, and experience with on-prem and colocation hosting environments
* Proven coding ability with a systems language such as Rust, C/C++, C#, Go, Java, or Scala
* Demonstrated ability with a scripting language such as Python, PHP, or Ruby
* AI/ML platform operations experience in an environment with challenging data and systems platform problems, including large-scale Kafka and Spark deployments (or counterparts such as Pulsar, Flink, and/or Ray), as well as workflow scheduling tools such as Apache Airflow, Dagster, or Apache Beam
* MLOps experience working with medium-to-large-scale GPU clusters in Kubernetes (Kubeflow), HPC environments, or large-scale cloud-based ML deployments
* Working knowledge of NVIDIA CUDA and custom AI/ML libraries
* Knowledge of Linux systems optimization and administration
* Understanding of data engineering, data governance, data infrastructure, and AI/ML execution platforms
* PyTorch, Keras, or TensorFlow experience is a strong nice-to-have
* HPC and Slurm experience is a strong nice-to-have

Compensation

The Redwood City, CA base pay range for this role is $190,000 - $285,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside Redwood City are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You

We're thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible.
* CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.
* Annual benefit for employees that can be used most meaningfully for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
* CZI Life of Service Gifts are awarded to employees to "live the mission" and support the causes closest to them.
* Paid time off to volunteer at an organization of your choice.
* Funding for select family-forming benefits.
* Relocation support for employees who need assistance moving to the Bay Area.
* And more!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn about our diversity, equity, and inclusion efforts.

If you're interested in a role but your previous experience doesn't perfectly align with each qualification in the job description, we still encourage you to apply, as you may be the perfect fit for this or another role.

Explore our work modes, benefits, and interview process at www.chanzuckerberg.com/careers.

#LI-Remote
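For illustration only, and not CZI's actual code: a minimal sketch of the kind of containerized PyTorch training workload the "What You'll Do" section above describes, written for a single GPU node and launched with torchrun (e.g. torchrun --nproc_per_node=4 train.py). The model, tensor sizes, and hyperparameters are placeholder assumptions.

    # Minimal distributed PyTorch training sketch (assumes launch via torchrun).
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")        # NCCL for GPU collectives
        local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
        torch.cuda.set_device(local_rank)
        device = f"cuda:{local_rank}"

        model = torch.nn.Linear(128, 10).to(device)    # placeholder model
        model = DDP(model, device_ids=[local_rank])
        opt = torch.optim.SGD(model.parameters(), lr=1e-3)

        for _ in range(100):                           # placeholder training loop
            x = torch.randn(32, 128, device=device)
            y = torch.randint(0, 10, (32,), device=device)
            loss = torch.nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

In practice a job like this would be packaged in a container image and scheduled onto GPU nodes by Kubernetes or a workflow tool; the sketch only shows the training entry point.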
#Location
Redwood City, California, United States
Company Overview:
Our client is a leading provider of advanced technology solutions, specializing in mission-critical applications that support national security initiatives. With a strong commitment to innovation and excellence, they empower their team members to deliver impactful solutions in a collaborative and supportive environment.

Position Overview:
We are seeking experienced Systems Engineers to join a platform engineering team in Chantilly, VA. As a Systems Engineer - Platform Engineer, you will collaborate with multiple project teams to design and produce reusable infrastructure and software components. You will play a crucial role in developing robust, scalable solutions using modern infrastructure-as-code and DevOps tools, aimed at enhancing application resilience, reducing costs, and accelerating delivery to mission customers.

An active security clearance with full-scope polygraph is required.

Responsibilities:
* Design and implement reusable software solutions and business processes using industry-standard infrastructure-as-code (IaC) and DevOps tools.
* Utilize cloud-native tools to build and maintain infrastructure supporting large, user-facing applications in AWS.
* Develop and deploy containerized applications using Docker, Podman, Kubernetes, and Helm (a minimal sketch of such a service follows at the end of this posting).
* Debug and optimize applications written in Java, Python, JavaScript, or Ruby on Linux platforms.
* Manage relational databases (e.g., PostgreSQL, MySQL) and document databases (e.g., Elasticsearch, Solr), including managed services like AWS RDS or OpenSearch.
* Provide tier 3 support and perform root cause analysis for application and infrastructure failures.
* Collaborate effectively with cross-functional teams in an agile environment using Git-based workflows.

Requirements:
* Bachelor's degree in Computer Science, Engineering, or a related field, with 3+ years of experience for senior level, 8+ years for expert level, or 13+ years for subject matter expert level.
* Proven experience leveraging cloud-native tools to build and sustain applications in AWS.
* Strong proficiency in writing infrastructure as code using Terraform or similar tools.
* Hands-on experience with containerization technologies such as Docker, Kubernetes, and Helm.
* Proficiency in programming languages such as Python (preferred), Java, or JavaScript.
* Experience managing relational databases (e.g., PostgreSQL, MySQL) and document databases (e.g., Elasticsearch, Solr).
* Ability to provide tier 3 support and conduct root cause analysis for application and infrastructure issues.
* Familiarity with Git and Agile methodologies for software development.
* Active security clearance with full-scope polygraph required.

Benefits:
* Comprehensive medical, dental, and vision plans with employer-paid premiums for employees and dependents.
* Health Spending Accounts (HSA) with employer contributions to cover plan deductibles.
* 100% employer match on 401(k) contributions up to 8% of annual payroll, immediately vested.
* Generous annual paid time off (PTO), including holidays, vacation, sick, personal, and administrative absences.
* Financial support for continuing education up to $10,000 annually.
* Dependent Care Flexible Spending Account (FSA) for elder and child care expenses.

Salary Range:
* Senior Level: $155,000 - $180,000
* Expert Level: $210,000 - $228,000
* Subject Matter Expert Level: $250,000 - $265,000

Location & Details: Contractor site in Chantilly, VA.
Flexible work hours with potential for off-hours support.
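For illustration only, and not the client's actual code: a minimal sketch of the kind of containerized Python service this role would build and debug, exposing a health endpoint of the sort Kubernetes liveness/readiness probes typically hit. The route, port, and checks are illustrative assumptions.

    # Minimal containerized Python service sketch with a health endpoint.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/healthz")
    def healthz():
        # A real service might verify database connectivity, queue depth, etc.
        return jsonify(status="ok"), 200

    if __name__ == "__main__":
        # Bind to 0.0.0.0 so probes can reach the process from outside the container.
        app.run(host="0.0.0.0", port=8080)

In a production image this would typically run behind a WSGI server such as gunicorn rather than the Flask development server.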
#Location
Chantilly, VA
About Coalfire
Coalfire is on a mission to make the world a safer place by solving our clients' toughest cybersecurity challenges. We work at the cutting edge of technology to advise, assess, automate, and ultimately help companies navigate the ever-changing cybersecurity landscape. We are headquartered in Denver, Colorado with offices across the U.S. and U.K., and we support clients around the world.

But that's not who we are - that's just what we do.

We are thought leaders, consultants, and cybersecurity experts, but above all else, we are a team of passionate problem-solvers who are hungry to learn, grow, and make a difference.

And we're growing fast.

We're looking for a Site Reliability Engineer I to support our Managed Services team.

Position Summary
As a Junior Site Reliability Engineer at Coalfire within our Managed Services (CMS) group, you will be a self-starter who is passionate about cloud technology and thrives on problem solving. You will work within major public clouds, utilizing automation and your technical abilities to operate the most cutting-edge offerings from Cloud Service Providers (CSPs). This role directly supports leading cloud software companies to provide seamless reliability and scalability of their SaaS products to the largest enterprises and government agencies around the world.

This can be a remote position (must be located in the United States).

What You'll Do
* Become a member of a highly collaborative engineering team offering a unique blend of Cloud Infrastructure Administration, Site Reliability Engineering, Security Operations, and Vulnerability Management across multiple clients.
* Coordinate with client product teams, engineering team members, and other stakeholders to monitor and maintain a secure and resilient cloud-hosted infrastructure to established SLAs in both production and non-production environments.
* Innovate and implement using automated orchestration and configuration management techniques. Understand the design, deployment, and management of secure and compliant enterprise servers, network infrastructure, boundary protection, and cloud architectures using Infrastructure-as-Code.
* Create, maintain, and peer review automated orchestration and configuration management codebases, as well as Infrastructure-as-Code codebases. Maintain IaC tooling and versioning within client environments.
* Implement and upgrade client environments with CI/CD infrastructure code and provide internal feedback to development teams on environment requirements and necessary alterations.
* Work across AWS, Azure, and GCP, understanding and utilizing their unique native services in client environments.
* Configure, tune, and troubleshoot cloud-based tools; manage cost, security, and compliance for clients' environments (a minimal sketch of this kind of automated check follows at the end of this posting).
* Monitor and resolve site stability and performance issues related to functionality and availability.
* Work closely with client DevOps and product teams to provide 24x7x365 support to environments through client ticketing systems.
* Support definition, testing, and validation of incident response and disaster recovery documentation and exercises.
* Participate in on-call rotations as needed to support client critical events and operational needs that may fall outside of business hours.
* Support testing and data reviews to collect and report on the effectiveness of current security and operational measures, in addition to remediating deviations from those measures.
* Maintain detailed diagrams representative of the client's cloud architecture.
* Maintain, optimize, and peer review standard operating procedures, operational runbooks, technical documents, and troubleshooting guidelines.

What You'll Bring
* BS or above in a related Information Technology field, or an equivalent combination of education and experience
* 2+ years of experience in 24x7x365 production operations
* Fundamental understanding of networking and network troubleshooting
* 2+ years of experience installing, managing, and troubleshooting Linux and/or Windows Server operating systems in a production environment
* 2+ years of experience supporting cloud operations and automation in AWS, Azure, or GCP (and aligned certifications)
* 2+ years of experience with Infrastructure-as-Code and orchestration/automation tools such as Terraform and Ansible
* Experience with IaaS platform capabilities and services (cloud certifications expected)
* Experience with ticketing tool solutions such as Jira and ServiceNow
* Experience using environmental analytics tools such as Splunk and Elastic Stack for querying, monitoring, and alerting
* Experience in at least one primary scripting language (Bash, Python, PowerShell)
* Excellent communication, organizational, and problem-solving skills in a dynamic environment
* Effective documentation skills, including technical diagrams and written descriptions
* Ability to work as part of a team with a professional attitude and demeanor

Bonus Points
* Previous experience in a consulting role within dynamic, fast-paced environments
* Previous experience supporting a 24x7x365 highly available environment for a SaaS vendor
* Experience supporting security and/or infrastructure incident handling and investigation, and/or system scenario re-creation
* Experience working with container orchestration solutions such as Kubernetes, Docker, EKS, and/or ECS
* Experience working within an automated CI/CD pipeline for release development, testing, remediation, and deployment
* Cloud-based networking experience (Palo Alto, Cisco ASAv, etc.)
* Familiarity with frameworks such as FedRAMP, FISMA, SOC, ISO, HIPAA, HITRUST, PCI, etc.
* Familiarity with configuration baseline standards such as CIS Benchmarks & DISA STIG
* Knowledge of encryption technologies (SSL, encryption, PKI)
* Experience with diagramming (Visio, Lucid Chart, etc.)
* Application development experience for cloud-based systems

Why You'll Want to Join Us

At Coalfire, you'll find the support you need to thrive personally and professionally. In many cases, we provide a flexible work model that empowers you to choose when and where you'll work most effectively, whether you're at home or an office.
Regardless of location, you'll experience a company that prioritizes connection and wellbeing, and you'll be part of a team where people care about each other and our communities. You'll have opportunities to join employee resource groups, participate in in-person and virtual events, and more. And you'll enjoy competitive perks and benefits to support you and your family, like paid parental leave, flexible time off, certification and training reimbursement, digital mental health and wellbeing support membership, and comprehensive insurance options.

At Coalfire, equal opportunity and pay equity are integral to the way we do business. A reasonable estimate of the compensation range for this role is $95,000 to $110,000 based on national salary averages. The actual salary offer to the successful candidate will be based on job-related education, geographic location, training, licensure and certifications, and other factors. You may also be eligible to participate in annual incentive, commission, and/or recognition programs. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

#LI-REMOTE
#LI-JB1
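For illustration only, and not Coalfire's tooling: a minimal sketch of the kind of small cost/compliance check referenced in the "What You'll Do" section above, flagging EC2 instances that lack a required tag. The tag key, region, and report format are illustrative assumptions.

    # Minimal AWS tag-compliance sketch using boto3.
    import boto3

    REQUIRED_TAG = "Owner"  # assumed cost-allocation tag key

    def untagged_instances(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        missing = []
        paginator = ec2.get_paginator("describe_instances")
        for page in paginator.paginate():
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    tags = {t["Key"] for t in inst.get("Tags", [])}
                    if REQUIRED_TAG not in tags:
                        missing.append(inst["InstanceId"])
        return missing

    if __name__ == "__main__":
        for instance_id in untagged_instances():
            print(f"{instance_id} is missing the {REQUIRED_TAG} tag")

A check like this would normally run on a schedule (e.g. in CI or a Lambda) and feed a ticketing or alerting workflow rather than printing to stdout.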
#Location
United States
Do you want to work for a mission-driven non-profit, writing software that will contribute to helping the livelihoods of millions of coffee farmers around the world? Enveritas is a 501(c)3 non-profit and Y Combinator-backed startup looking to hire for our Engineering & Data Group. You can learn more about this job and about our Backend and Data Engineering Team at https://www.enveritas.org/jobs/backend-software-eng/

We are looking for two backend software engineers with a focus on Python and PostgreSQL to join us on a remote/global, full-time basis. Our Backend and Data Engineering Team is a four-person team (soon to be six!) and is part of our Engineering & Data Group - a quirky, talented, and humble group of about twenty with diverse backgrounds ranging from journalism to academia to international industry.

About Our Backend & Data Engineering Team

The Backend & Data Engineering Team builds software to collect, analyze, and report data about coffee farmers' conditions and practices. This large-scale data-collection effort requires many moving parts to work together, and we use technology to support that effort at every step of the process - from identifying coffee farms in satellite imagery, to coordinating survey edits across country teams, to detecting data anomalies in real time that can be investigated while teams are still in the field. A core part of our work is in data aggregation and report generation, with insights ultimately being shared with roasters and other stakeholders on how to assist in improving the social, economic, and environmental conditions of smallholder farmers.

While our tooling varies across internal products, our backend services primarily use a Python/PostgreSQL stack running on Linux to serve our GraphQL APIs (a minimal sketch of this kind of stack appears at the end of this posting). We use git and GitHub for maintaining our code, CircleCI for CI/CD, and AWS for hosting our services and static resources, with containerization where appropriate for development and deployment. We've begun working with Terraform.

What You'll Be Doing

You will contribute to major feature planning and development, both independently and in collaboration with your teammates.

* Implement new features on our core platforms, Jebena and Sini. You'll participate in long-term planning and product roadmaps, collaborate with product managers on writing specs for the team to implement, and develop features from specs. You should be comfortable collaborating with non-Engineering teams to understand their feature needs. The lion's share of your time will be spent working with Python and PostgreSQL to add features to our internal platforms.
* Maintenance and enhancement of existing code. You'll work with other engineers to triage and resolve incoming issues (we use Sentry). Our team also reserves Fridays for bug-fixing, resolving technical debt, and discovering and relieving pain points for our users.
* Manage AWS services. In tandem with our Head of IT, part of this role includes helping manage our AWS account, including reviewing our CI/CD setup and proposing ways to further automate and secure it, including expanding our usage of Terraform.

Qualifications

Read this first: research shows that people of different backgrounds read job postings differently. If you don't think you meet all of the qualifications but do think you'd be a great match for us, please consider applying and sharing more in your cover letter. We'd love to talk with you to see what skills you can bring to our team.
This said, we are most likely to be interested in your candidacy if you can demonstrate the majority of the qualifications listed below:

* A degree in computer science, or equivalent training in the principles of software engineering.
* Strong grasp of design patterns for building software that is well-encapsulated, performant, and elegant.
* Multiple years of professional experience as a backend engineer in more than one team environment, including both developing engineering specs and writing code in Python.
* Extensive experience with Python and PostgreSQL, and creating well-designed data models.
* Background developing applications that provide HTTP-based APIs.
* Familiarity with Docker containers, AWS services (EC2, RDS, CloudFront), and CI/CD setups.
* Excellent communication and analytical skills.

Who You Are

Our team is fully distributed, so you should be comfortable with remote work. This is a full-time individual contributor role. While you can be located anywhere that our EOR (Deel) supports, our core hours are 10am to 2pm Eastern Time, Monday through Friday, with team members choosing either an early start or a later stop as suits them.

You should be inspired by our mission to improve the lives of smallholder coffee farmers, and have an interest in sustainability. You should have deep empathy for users of our tools and understand the importance of supporting the work of other teams. Because operational and business needs can be ambiguous and change on a short time scale, you should have a love for environments with uncertainty, and enjoy not only solving problems but discovering and demystifying them.

We are a small team! You should be comfortable working both independently and as a thoughtful collaborator, sensitive to the legibility and maintainability of your code when in the hands of your teammates.

About Working With Us & Compensation

Enveritas has teams around the world: we are about 100 people spread over almost two dozen countries, and of all backgrounds, faiths, and identities. To learn more about working at Enveritas, see https://www.enveritas.org/jobs/

For a US-based hire, base salary for this position will be between $130,000 and $150,000 annually (paid semi-monthly). This is a full-time exempt position. Full benefits include a 401(k) with matching contributions, medical/dental/vision coverage, a Flexible Spending Account (FSA), 4 weeks of vacation in addition to 13 standard holidays, and personal/sick time.

For a hire outside the US, our offer will be competitive; the specific benefits and compensation details will vary as required to account for your region's laws and requirements. Salary for this position will be paid in the relevant local currency.

For all staff, we are able to offer:

* Annual education budget for conferences, books, and other professional development opportunities.
* Annual all-company retreat.
* Field visits to our Country Ops teams in coffee-growing countries such as Colombia, Costa Rica, Ethiopia, and Indonesia.

Interview Process

We are committed to fair and equitable hiring. To honor this commitment, we are being transparent about our interview process. We are interested in learning what working with you would be like and believe the process below is the fairest method for us to see you at your best - and for you to learn about us! If you feel that a different method would be better for us to learn what working together would be like, please tell us in your application.
After your introductory interview, we expect your interview process to take three to four weeks (depending on scheduling) and consist of four conversations totaling about five hours. You should plan to spend about four additional hours in total preparing for interviews. See the hiring page at https://www.enveritas.org/jobs/backend-software-eng/ for details about each of these interviews, including links to our interview prompts as available.

* Introductory Interview (30 minutes; Google Meet; audio-only)
* First Technical Interview (60 minutes; Google Meet)
* Second Technical Interview (60-90 minutes; Google Meet)
* Manager Interview (45-60 minutes; Google Meet)

How to Apply

Please apply using our Greenhouse application form. Feel free to contact us at [email protected] should you have any questions about the position or the interview process. Questions about this opportunity or process will not reflect negatively on your application.

We care deeply about diversity. Our work is complex and nuanced, so the more diversity we have in the voices working on our problems, the larger an impact our work can have for the world. Enveritas is an Equal Opportunity Employer encouraging an inclusive and diverse workforce. We embrace and celebrate the unique experiences, perspectives, and cultural backgrounds that each individual brings to the workplace. We are dedicated to hiring employees who reflect the communities we serve, and we strongly encourage qualified candidates from all backgrounds to apply.

A few notes about our communications: we are not able to reply to messages sent to staff outside of either our application process or our jobs email address, as this is unfair to other candidates. Also, Enveritas has been made aware of fake job postings by individuals pretending to hire persons seeking employment. These individuals are looking to collect personal information about you for fraudulent purposes. All legitimate Enveritas job openings are posted under https://enveritas.org/jobs/ and all recruiting emails from Enveritas team members will come from @enveritas.org.
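For illustration only, and not Enveritas code: a minimal sketch of the Python/PostgreSQL/GraphQL stack described above, with one graphene query field backed by a parameterized PostgreSQL query via psycopg2. The table and column names ("farms", "country") and the connection string are illustrative assumptions.

    # Minimal graphene + psycopg2 sketch of a GraphQL field backed by PostgreSQL.
    import graphene
    import psycopg2

    def fetch_farm_count(country):
        # Parameterized query; real connection settings would come from the environment.
        with psycopg2.connect("dbname=enveritas") as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT count(*) FROM farms WHERE country = %s", (country,))
                (count,) = cur.fetchone()
        return count

    class Query(graphene.ObjectType):
        farm_count = graphene.Int(country=graphene.String(required=True))

        def resolve_farm_count(root, info, country):
            return fetch_farm_count(country)

    schema = graphene.Schema(query=Query)
    # Example query (graphene exposes snake_case fields as camelCase by default):
    # schema.execute('{ farmCount(country: "ET") }')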
Summary
As a Senior Principal Software Engineer at Allegiant, you are a full-stack engineer and play a key role in the delivery of highly reliable, scalable, and maintainable systems. You will drive projects end-to-end, collaborating on product definitions with Product Managers, architecting and implementing technical solutions with talented teams, and ensuring continued success even after deployment. This position works closely with others to facilitate platform convergence and, when necessary, participates in a phased implementation of new applications using standard development tools and methodologies. You will lead and participate in design reviews, architecture discussions, and other technical leadership activities. You're comfortable working independently as well as supporting other team members. You're pragmatic, tenacious, and comfortable with ambiguity. You'll be able to balance technical leadership and acumen with strong business judgment to make the right decisions about technology choices. You'll strive for simplicity while bringing technical insights into how to refine and improve the system, ultimately ensuring performance, stability, and an exceptional end user experience.

Visa Sponsorship Available
No

Minimum Requirements
A combination of education and experience will be considered. Must be authorized to work in the US as defined by the Immigration Act of 1986. Must pass a criminal background check.
Education: Bachelor's Degree in Computer Science, Math, MIS, or a related field; Master's Degree preferred.
Certification: Java (8/9), JavaScript, PHP 7.x, C#, C++, Python, HTML5, CSS3, AJAX, etc.
Years of Experience: Minimum twelve (12) years of development experience as a seasoned middleware engineer required; airline and/or hospitality experience preferred. Minimum nine (9) years of software development experience, including architecture and building multi-tiered, high-volume, fault-tolerant, high-availability, and globally distributed systems in a Linux environment; e-commerce experience is a plus.

Preferred Requirements
* Languages Experience: Java (8, 9), server-side JavaScript (under Node.js, Meteor), PHP 7.x, C#, C++, Python, HTML5, CSS3, AJAX, JavaScript, jQuery, and the latest frameworks (AngularJS, ReactJS/Redux, Backbone)
* Technologies/Frameworks Experience:
* Experience with the following is required: JBoss/WildFly server, Spring Boot 2.0, Tomcat, Linux, HTTP, SOAP/REST web services/microservices, XML, JSON
* Experience with fault-tolerant message queuing/brokering systems (e.g. AMQ, RabbitMQ, ZeroMQ, Kafka)
* Solid engineering experience working on EJBs and the web layer, Spring Framework, Maven
* Experience with unit testing frameworks and tools such as JUnit, TestNG, Mockito, Jasmine, Mocha, etc.
* Experience with CI/CD build servers (Jenkins, Bamboo, TravisCI, TeamCity, etc.)
* Environment deployment/orchestration (Kubernetes, Docker, Ansible, etc.) is highly desired
* Experience with AWS or similar cloud platforms (OpenStack, Azure, Google Cloud) is highly desired
* Experience/knowledge with service discovery solutions like Consul, Eureka, or Zookeeper
* Familiarity with the Inversion of Control paradigm is highly desired; experience with Java-based IoC frameworks is a definite plus
* Experience with ORM frameworks for Java, Node, PHP, or Mongo (e.g., Hibernate, ORM2, Sequelize, Doctrine, Mongoose, etc.) is required
* Experience with monitoring tools (SumoLogic, Splunk, Logstash, ELK, DataDog, etc.) is a plus
* Experience with SQL and NoSQL databases (for example DB2, MySQL, Mongo, Cassandra, etc.) is required; experience with cloud-hosted variants (Cloudant, Dynamo, various RDS flavors) is highly desired
* Significant and demonstrable experience implementing Java best practices, especially around scalability, availability, and performance
* Strong and demonstrable experience in the design and development of public-facing and private REST APIs
* Strong and demonstrable experience working in teams with a heavy emphasis on DevOps, Automation, and Quality
* Highly developed design skills with strong experience in algorithms, data structures, OOD, applied enterprise design patterns, and database design is required; domain-driven design and data modeling are required
* Track record of building and maintaining excellent working relationships with peers across organizations (QA, Development, PM, UX, etc.)
* Experience and understanding of software source control systems, preferably Git
* Experience making trade-off decisions regarding the architecture and design of software systems
* Experience using Unix/Linux-based OS, including performing basic administrative tasks, is a plus
* Track record of delivering excellent customer experiences
* Familiarity with Agile and Scrum methodologies
* Stay abreast of new technologies and methods for building high-quality software (conferences, meetups, etc.)
* Excellent analytical thinking, problem solving, communication, organization, and interpersonal skills; able to simplify complex problems, processes, or projects into component parts and explore and evaluate them systematically
* Independent thinker with creative, resourceful, and proactive problem-solving skills, working with a close-knit development team that offers full ownership of projects in a supportive design environment
* Excellent written and verbal communication skills required.
* Must have the ability to communicate ideas effectively and cross-functionally; exhibit creativity, flexibility, adaptability, and the drive to achieve results; and have the capacity to work independently and as a team player
* Proficient in Microsoft Office products: Word, Excel, and Outlook

Job Duties
* Lead a software project from requirements analysis through deployment, with complete responsibility for all technical deliverables through the project life cycle (requirements analysis, design, implementation, QA support, and deployment) with no supervision.
* Work with other teams such as QA, PMO, and IT Operations, providing technical support and guidance to ensure successful delivery of a software project.
* Possess expert knowledge in performance, scalability, enterprise system architecture, and engineering best practices.
* Resolve application performance and scalability issues by identifying bottlenecks, resource utilization, and key areas of improvement.
* Functionally decompose complex problems into simple, straightforward solutions.
* Serve as a member of the architecture team responsible for framework evaluation, recommendation, and integration planning; modeling processes; developing reusable components; and designing an n-tier system and scalable architecture.
* Provide solution architecture for business problems while balancing essential technical guidelines.
* Refactor current application implementations to enhance the application and align with the technology roadmap.
* Drive innovation by contributing new ideas for our processes, tools, and technologies.
* Design and implement product enhancements based on business priorities.
* Exert technical influence over multiple teams, increasing their productivity and effectiveness by sharing your deep knowledge and experience.
* Design and develop domain data models and database schemas to support business requirements.
* Work with business analysts to gather and analyze requirements; develop high-level system narratives, storyboards, and UI prototypes.
* Keep up with the latest developments in front-end and middleware frameworks and their communities.
* Conduct design and code reviews; contribute to, adhere to, and enforce standards and best practices in software development.
* Develop prototypes or demos for any strategic business initiative.
* Assist in the career development of others, actively mentoring individuals on advanced technical issues and helping managers guide the career growth of their team members.
* Develop complex database interactions and optimizations using ORM-driven SQL and native ad hoc queries.
* Ensure any direct reports understand and apply our Customer Commitment and customer service standards to their daily responsibilities, as appropriate.
* Model Allegiant's customer service standards in personal actions and when providing leadership direction.

Physical Requirements
The physical demands and work environment described here are representative of those that must be met by a Team Member to successfully perform the essential functions of the role. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions of the role.

Office - While performing the duties of this job, the Team Member is regularly required to stand, sit, talk, hear, see, reach, stoop, kneel, and use hands and fingers to operate a computer, keyboard, printer, and phone. May be required to lift, push, pull, or carry up to 20 lbs.
May be required to work various shifts/days in a 24-hour operation. Regular attendance is a requirement of the role. Exposure to moderate noise (i.e., a business office with computers, phones, printers, and foot traffic), temperature, and light fluctuations. Ability to work in a confined area as well as the ability to sit at a computer terminal for an extended period of time. Some travel may be a requirement of the role.

Essential Services Provider
Allegiant, as a national air carrier, is deemed an essential service provider during declared national and state emergencies. Team Members will be required to report to their assigned trip or work location during national and state emergencies unless prohibited by local, state, or federal order.

EEO Statement
Equal Opportunity Employer: Disability/Veteran
For more information, see https://allegiantair.jobs
People of color, women, LGBTQIA+, immigrants, veterans, and persons with disabilities are encouraged to apply.

$170,700 - $200,500 a year

Full Time Benefits:
Profit Sharing
Medical/Dental/Vision/Life/Disability Insurance
Medical Travel Reimbursement
Legal, Identity and Pet Insurance
401K with an employer match
Employee Stock Purchase Plan
Employee Assistance Program
Tuition Reimbursement
Flight Benefits
Paid vacation, holidays, and sick time

Part Time Benefits:
Profit Sharing
Medical Travel Reimbursement
Legal, Identity and Pet Insurance
401K with an employer match
Employee Stock Purchase Plan
Employee Assistance Program
Tuition Reimbursement
Flight Benefits
Sick time
#Location
Las Vegas, NV
Chan Zuckerberg Biohub - San Francisco is hiring a
Remote AI/ML HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub Network has an immediate opening for an AI/ML High Performance Computing (HPC) Principal Engineer. The CZ Biohub Network is composed of several new institutes that the Chan Zuckerberg Initiative created to do great science that cannot be done in conventional environments. The CZ Biohub Network brings together researchers from across disciplines to pursue audacious, important scientific challenges. The Network consists of four institutes throughout the country: San Francisco, Silicon Valley, Chicago, and New York City. Each institute closely collaborates with the major universities in its local area. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports several hundred of the brightest, boldest engineers, data scientists, and biomedical researchers in the country, with the mission of understanding the mysteries of the cell and how cells interact within systems.

The Biohub is expanding its global scientific leadership, particularly in the area of AI/ML, with the acquisition of the largest GPU cluster dedicated to AI for biology. The AI/ML HPC Principal Engineer will be tasked with helping to realize the full potential of this capability, in addition to providing advanced computing capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other computing needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure, supplemented with cloud computing, to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist researchers with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc., for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support machine learning capability growth at the CZ Biohub
* Provide scientist support in the deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's Degree in Biology or Life Sciences is preferred.
Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production compute, interconnect, storage hardware, software systems, and storage subsystems
* Configuring and administering parallel, network-attached storage (Lustre, GPFS on ESS, NFS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, VAST, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.) and implementing fairshare, node sharing, backfill, etc. for compute and GPUs (a minimal job-submission sketch follows at the end of this posting)
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* OpenACC, NVHPC, and an understanding of CUDA driver compatibility issues
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software (Modules, Spack)
* Familiarity with source control tools (Git or SVN)
* Experience supporting the use of popular ML frameworks such as PyTorch and TensorFlow
* Familiarity with cybersecurity tools, methodologies, and best practices for protecting systems used for science
* Experience with movement, storage, backup, and archiving of large-scale data

Nice to have:

* An advanced degree is strongly desired

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date.
Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.
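For illustration only, and not the Biohub's actual tooling: a minimal sketch of submitting a GPU job to a SLURM cluster from Python by shelling out to sbatch, as referenced in the job-management bullet above. The partition name, GPU count, time limit, job name, and script path are illustrative assumptions.

    # Minimal SLURM GPU job-submission sketch via sbatch.
    import subprocess

    def submit_gpu_job(script="train.sh", gpus=4, partition="gpu", hours=12):
        cmd = [
            "sbatch",
            f"--partition={partition}",
            f"--gres=gpu:{gpus}",       # request GPUs via generic resources
            f"--time={hours}:00:00",
            "--job-name=ml-train",
            script,
        ]
        # sbatch prints "Submitted batch job <id>" on success
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(submit_gpu_job())

In practice the batch script itself would load environment modules or a container, and scheduler policies (fairshare, backfill, node sharing) would be configured on the cluster side rather than in the submission wrapper.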
#Location
San Francisco, California, United States
Chan Zuckerberg Biohub - San Francisco is hiring a
Remote HPC Principal Engineer
\nThe Opportunity\n\nThe Chan Zuckerberg Biohub has an immediate opening for a High Performance Computing (HPC) Principal Engineer. The CZ Biohub is a one-of-a-kind independent non-profit research institute that brings together three leading universities - Stanford, UC Berkeley, and UC San Francisco - into a single collaborative technology and discovery engine. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports over 100 of the brightest, boldest engineers, data scientists, and biomedical researchers in the Bay Area, with the mission of understanding the underlying mechanisms of disease through the development of tools and technologies and their application to therapeutics and diagnostics.\n\nThis position will be tasked with strengthening and expanding the scientific computational capacity to further the Biohub's expanding global scientific leadership. The HPC Principal Engineer will also provide IT capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other IT needs expressed by scientific projects.\n\nThis position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.\n\nWhat You'll Do\n\n\n* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements\n\n* Build an on-prem HPC infrastructure supplemented with cloud computing to support the expanding IT needs of the Biohub\n\n* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects\n\n* Plan, organize, track, and execute projects\n\n* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges\n\n* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities\n\n* Promote and assist with the use of Cloud Compute Services (primarily AWS and GCP), containerization tools, etc. for researchers, scientific clients, and research groups\n\n* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors\n\n* Assist in cost & schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution\n\n* Support Machine Learning capability growth at the CZ Biohub\n\n* Provide scientist support in deployment and maintenance of developed tools\n\n* Plan and execute all of the above responsibilities independently with minimal intervention\n\n\n\n\nWhat You'll Bring \n\nEssential -\n\n\n* Bachelor's Degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable. 
An advanced degree is strongly desired.\n\n* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks\n\n* Experience building on-prem HPC infrastructure and capacity planning\n\n* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors\n\n* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs\n\n* Experience with HPC and cloud computing environments\n\n* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds\n\n* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings\n\n* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them\n\n* Demonstrated ability to quickly and creatively implement novel solutions and ideas\n\n\n\n\nTechnical experience includes - \n\n\n* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production storage hardware, software systems, storage networks, and systems\n\n* Configuring and administering parallel and network-attached storage (Lustre, NFS, ESS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, etc.)\n\n* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.)\n\n* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, cgroups\n\n* Scripting languages (including Bash, Python, or Perl)\n\n* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform); a brief containerized GPU example appears after this list\n\n* High performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)\n\n* Configuring, installing, tuning, and maintaining scientific application software\n\n* Familiarity with source control tools (Git or SVN)
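As a rough illustration of the containerization bullet above (not taken from the posting; the container image and test command are hypothetical placeholders), running a GPU-enabled workload under Singularity could look like:

# Pull a (hypothetical) PyTorch CUDA image from Docker Hub into a local SIF file
singularity pull pytorch.sif docker://pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

# --nv exposes the host NVIDIA driver and GPUs inside the container
singularity exec --nv pytorch.sif python -c "import torch; print(torch.cuda.is_available())"

The same container could equally be launched through the cluster's job scheduler rather than run interactively.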
The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.\n\nCompensation \n\n\n* Principal Engineer = $212,000 - $291,500\n\n\n\n\nNew hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar jobs related to Consulting, Education, Cloud, Engineer, and Linux roles:\n\n
$50,000 — $85,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nSan Francisco, California, United States