Yahoo Mail is the ultimate consumer inbox, with hundreds of millions of users. It’s the best way to access your email and stay organized from a computer, phone, or tablet. With its beautiful design and lightning-fast speed, Yahoo Mail makes reading, organizing, and sending emails easier than ever.

A Little About Us: The Mail engineering team builds industry-leading Mail platforms and integrates with almost every Yahoo product in the company. We build and maintain platforms at scale and ensure that Yahoo Mail can always serve its hundreds of millions of users and billions of daily transactions. On the production engineering team, we use a combination of open-source software and internal tools we develop ourselves. We engineer automated solutions for daily operations, auto-scaling, auto-remediation, and failure detection/prediction, performing frequent changes across thousands of servers and nodes without interruption. We encourage new ideas and continuously experiment with and evaluate new technologies to assimilate them into our infrastructure. Our team structure encourages trust, learning from one another, having fun, and attracting people who are passionate about what they do.

A Lot About You: In this role, you will collaborate closely with architects and software developers to deliver our services to Yahoo Mail’s extensive user base. As an essential team member, you will work with cross-functional teams to design, implement, and manage Yahoo Mail’s cloud infrastructure and migrate large-scale on-prem applications to Google Cloud Platform (GCP). Your main responsibilities will include the administration and maintenance of cloud infrastructure: on-call support, monitoring, security best practices, automation, deployment, establishing CI/CD pipelines, and building reusable cloud infrastructure templates via infrastructure as code (IaC).
Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience.
- 5+ years of experience in Site Reliability, DevOps, or Infrastructure roles, including on-call duties in cloud environments; Google Cloud Platform (GCP) experience preferred.
- Proficiency in managing GCP services, including Kubernetes Engine (GKE), Compute Engine (GCE), networking, security, CI/CD pipelines, and other common cloud technologies.
- Proven experience working with very large-scale applications, computer systems, and networks.
- Proficiency in Linux, TCP/IP, HTTP, mail protocols, DNS, content delivery networks (CDNs), load balancers, and troubleshooting techniques.
- Proficiency in building infrastructure as code (IaC) with tools like Terraform, Ansible, and Helm charts.
- Proficiency in programming with Python, Golang, or Java, along with the ability to build CI/CD pipelines for deploying services in these languages.

Nice to have:
- Cloud databases and storage platforms, such as Cloud Storage (GCS), Cloud SQL, Spanner, and Firestore.
- Experience building transaction processing systems (OLTP) and/or large-scale analytics (OLAP).
- Machine learning and AI platforms, such as Vertex AI, generative AI, BigQuery, Looker, and Dataproc.
- Cloud observability and OpenTelemetry.
- A proven track record of migrating on-premise infrastructure to GCP or another cloud.
- Years of operational experience in both on-premise and cloud environments.
- The ability to build cloud services, including GKE and Spanner, from scratch using IaC (Terraform) and various build and CI/CD tools.
- A collaborative, team-player attitude and a continuous-improvement mindset.
- Strong communication and presentation skills to explain the architecture and design of cloud technologies to senior engineering architects.
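To illustrate the auto-remediation and failure-detection work this role describes, here is a minimal, hypothetical Python sketch (not Yahoo's actual tooling; all names and thresholds are illustrative): a node is flagged for remediation only after several consecutive failed health checks, so a single transient failure does not trigger an action.

```python
# Hypothetical sketch of threshold-based failure detection for auto-remediation.
# FAILURE_THRESHOLD and the action names are illustrative assumptions.

FAILURE_THRESHOLD = 3  # consecutive failures before remediating


def next_action(consecutive_failures: int, healthy: bool) -> tuple[int, str]:
    """Fold one health-check result into the failure counter and pick an action.

    Returns (new_failure_count, action), where action is one of
    "none", "monitor", or "remediate".
    """
    if healthy:
        return 0, "none"           # any success resets the counter
    count = consecutive_failures + 1
    if count >= FAILURE_THRESHOLD:
        return 0, "remediate"      # e.g. drain and restart the node, then reset
    return count, "monitor"        # not enough evidence to act yet


# Example: two failures are tolerated, the third triggers remediation.
count = 0
actions = []
for healthy in [False, False, False, True]:
    count, action = next_action(count, healthy)
    actions.append(action)
# actions == ["monitor", "monitor", "remediate", "none"]
```

In practice this decision function would sit inside a monitoring loop; keeping it pure makes it easy to unit-test separately from the probing and remediation side effects.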
You know how to set sprint goals that are specific, measurable, achievable, and realistic, and you understand backlog refinement, prioritizing backlog items, and being an active participant in sprint planning sessions.

#LI-AC1

The material job duties and responsibilities of this role include those listed above as well as adhering to Yahoo policies; exercising sound judgment; working effectively, safely and inclusively with others; exhibiting trustworthiness and meeting expectations; and safeguarding business operations and brand integrity.

At Yahoo, we offer flexible hybrid work options that our employees love! While most roles don’t require regular office attendance, you may occasionally be asked to attend in-person events or team sessions. You’ll always get notice to make arrangements. Your recruiter will let you know if a specific job requires regular attendance at a Yahoo office or facility. If you have any questions about how this applies to the role, just ask the recruiter!

Yahoo is proud to be an equal opportunity workplace. All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on, age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category. Yahoo will consider for employment qualified applicants with criminal histories in a manner consistent with applicable law. Yahoo is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (www.yahooinc.com/careers/contact-us.html) or call +1.866.772.3182. Requests and calls received for non-disability related issues, such as following up on an application, will not receive a response.
We believe that a diverse and inclusive workplace strengthens Yahoo and deepens our relationships. When you support everyone to be their best selves, they spark discovery, innovation and creativity. Among other efforts, our 11 employee resource groups (ERGs) enhance a culture of belonging with programs, events and fellowship that help educate, support and create a workplace where all feel welcome.

The compensation for this position ranges from $128,250.00 - $266,875.00/yr and will vary depending on factors such as your location, skills and experience. The compensation package may also include incentive compensation opportunities in the form of a discretionary annual bonus or commissions. Our comprehensive benefits include healthcare, a great 401k, backup childcare, education stipends and much (much) more.

Currently work for Yahoo? Please apply on our internal career site.

Yahoo serves as a trusted guide for hundreds of millions of people globally, helping them achieve their goals online through our portfolio of iconic products. For advertisers, Yahoo Advertising offers omnichannel solutions and powerful data to engage with our brands and deliver results.

#Salary and compensation
No salary data published by the company, so we estimated a salary based on similar jobs related to Design, Cloud, Senior and Engineer roles:
$67,500 — $127,500/year
#Benefits
💰 401(k)
🌎 Distributed team
⏰ Async
🤓 Vision insurance
🦷 Dental insurance
🚑 Medical insurance
🏖 Unlimited vacation
🏖 Paid time off
📆 4 day workweek
💰 401k matching
🏔 Company retreats
🏬 Coworking budget
📚 Learning budget
💪 Free gym membership
🧘 Mental wellness budget
🖥 Home office budget
🥧 Pay in crypto
🥸 Pseudonymous
💰 Profit sharing
💰 Equity compensation
⬜️ No whiteboard interview
👀 No monitoring system
🚫 No politics at work
🎅 We hire old (and young)
#Location
US - United States of America
👉 Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
Travel is not just about the destination; it's about every memory made along the way. We are dedicated to shaping the future of travel by partnering with 200+ airline, hospitality, cruise, passenger rail, and financial services companies to create new, meaningful revenue streams through incredible customer experiences. Rooted in our core values of being ambitious, innovative, and collaborative, we are driven to continuously raise the bar, exceed expectations, and bring out the best in everyone, fostering a culture where we believe we are better together, working towards an extraordinary future in travel.
Come help us transform everyday travel into extraordinary experiences.

ABOUT THE ROLE:
We are looking for a Data Engineer to join our Data Engineering team for a permanent position. We’re an industry-leading web-based organization that is continuously reshaping how consumers interact with their loyalty programs. We work with the world’s largest airline, hotel, financial, and retail rewards programs to tackle complex challenges and come up with innovative e-commerce solutions, with the Data Engineering team playing a critical role in this. If you’d like to be a part of it, we’d love to hear from you.

Reporting to the Team Lead, Data Engineering, you will:
-Work in a scrum-based team that is passionate about enabling a data culture throughout the organization.
-Design and develop scalable and robust pipelines for data consumption by downstream applications in support of advanced analytics, AI/ML products, and system interoperability.
-Improve upon existing data processing mechanisms through automated testing and monitoring, to continually enhance data integrity and accuracy.
-Actively participate in solution design and modeling to ensure data products are developed according to best practices, standards, and architectural principles.
-Foster collaboration and innovation in order to deliver high-quality autonomous data products.
-Support production systems to deliver a high degree of data availability, consistency, and accuracy.

YOU ARE SOMEONE WITH:
-Excellent hands-on experience working with SQL and NoSQL data sets.
-Proficiency in implementing and supporting Snowflake features and functions.
-Hands-on experience using ETL tools such as Talend and orchestration tools such as Airflow.
-Hands-on experience implementing event-driven or real-time analytics architecture services.
-Proficiency in developing data products using Python, SQL, and Java.
-Understanding of modern data warehouse concepts and data modeling, e.g. data lakes, streams, etc.
-Hands-on experience developing BI products and solutions using platforms such as Tableau.
-Working knowledge of DevOps principles such as CI/CD.
-Working knowledge of cloud computing, especially AWS services.
-Self-discipline, eagerness to help, and most importantly a thirst for continual learning.
-A data-consumer focus, constantly driven to exceed stakeholder data and information needs.
-Effective communication and collaboration, within the immediate team as well as across other organizational units.

NICE TO HAVES:
-Strong knowledge of general software engineering principles and practices.
-Experience integrating with services such as Dataiku and NetSuite.
-Hands-on experience with AWS services.
-Experience with containers and related infrastructure, such as Docker and Kubernetes.
-Experience developing data products using data visualization/dashboarding tools like Tableau.
-Experience with RESTful APIs.

OUR TECH STACK:
Vertica, PostgreSQL, CouchDB
Snowflake
AWS Cloud Data Services
Airflow
Talend
GitLab
Tableau
Docker/Kubernetes
Kafka

WHAT YOU’LL LOVE ABOUT US:
🏦 RRSP/401(k) Matching
🏥 Comprehensive Health Plans
📅 Flexible Paid Time Off
✈️ Travel Experience Credit
🧘 Annual Wellness Credit
🥗 Team Events and Monthly Lunches
💻 Home Office/Commuter Credit
🌅 Work From Anywhere Program
🍼 Parental Leave Top Up
🌍 Adventure Pass

OUR PROCESS:
Plusgrade is an equal-opportunity employer and is committed to providing an accessible recruitment process. We welcome applications from all qualified individuals and are committed to equal employment opportunities regardless of gender identity or expression, race, ethnic origin, creed, place of origin, age, sex, marital status, physical or mental disability, sexual orientation, and any other category protected by law. Upon request, we will provide accommodation for applicants with disabilities.

We believe in diversity and inclusivity, and that is why our interview process is designed for a positive candidate experience and to ensure every candidate is evaluated equally. All applications will be reviewed by our Talent Team and the successful candidate(s) will go through the following recruitment process:

• Recruiter Phone Interview
• Hiring Manager Interview
• Take-home Assessment or remote coding exercise (if applicable)
• Team Interview

All candidates will be provided with feedback regardless of whether they pass all of our interview stages. All your information will be kept confidential.

#Salary and compensation
No salary data published by the company, so we estimated a salary based on similar jobs related to Design, Docker, Recruiter, Travel, DevOps, Cloud, NoSQL, API and Engineer roles:
$80,000 — $125,000/year
#Location
Montreal, Quebec
Chan Zuckerberg Biohub - San Francisco is hiring a
Remote AI/ML HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub Network has an immediate opening for an AI/ML High Performance Computing (HPC) Principal Engineer. The CZ Biohub Network is composed of several new institutes that the Chan Zuckerberg Initiative created to do great science that cannot be done in conventional environments. The CZ Biohub Network brings together researchers from across disciplines to pursue audacious, important scientific challenges. The Network consists of four institutes throughout the country: San Francisco, Silicon Valley, Chicago, and New York City. Each institute closely collaborates with the major universities in its local area. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports several hundred of the brightest, boldest engineers, data scientists, and biomedical researchers in the country, with the mission of understanding the mysteries of the cell and how cells interact within systems.

The Biohub is expanding its global scientific leadership, particularly in the area of AI/ML, with the acquisition of the largest GPU cluster dedicated to AI for biology. The AI/ML HPC Principal Engineer will be tasked with helping to realize the full potential of this capability, in addition to providing advanced computing capabilities and consulting support to science and technical programs.
This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other computing needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure supplemented with cloud computing to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote cloud compute services (primarily AWS and GCP), containerization tools, etc. to scientific clients and research groups, and assist researchers in using them
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support machine learning capability growth at the CZ Biohub
* Provide scientist support in the deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently, with minimal intervention

What You'll Bring

Essential:

* Bachelor’s degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production compute, interconnect, storage hardware, software systems, and storage subsystems
* Configuring and administering parallel, network-attached storage (Lustre, GPFS on ESS, NFS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, VAST, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.) and implementing fairshare, node sharing, backfill, etc. for compute and GPUs
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* OpenACC, nvhpc, and an understanding of CUDA driver compatibility issues
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning and maintaining scientific application software (Modules, Spack)
* Familiarity with source control tools (Git or SVN)
* Experience supporting the use of popular ML frameworks such as PyTorch and TensorFlow
* Familiarity with cybersecurity tools, methodologies, and best practices for protecting systems used for science
* Experience with movement, storage, backup, and archiving of large-scale data

Nice to have:

* An advanced degree is strongly desired

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date.
Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate’s skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation
No salary data published by the company, so we estimated a salary based on similar jobs related to Consulting, Education, Cloud, Node, Engineer and Linux roles:
$57,500 — $85,000/year
#Location
San Francisco, California, United States
Chan Zuckerberg Biohub - San Francisco is hiring a
Remote HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub has an immediate opening for a High Performance Computing (HPC) Principal Engineer. The CZ Biohub is a one-of-a-kind independent non-profit research institute that brings together three leading universities - Stanford, UC Berkeley, and UC San Francisco - into a single collaborative technology and discovery engine. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports over 100 of the brightest, boldest engineers, data scientists, and biomedical researchers in the Bay Area, with the mission of understanding the underlying mechanisms of disease through the development of tools and technologies and their application to therapeutics and diagnostics.

This position will be tasked with strengthening and expanding the scientific computational capacity to further the Biohub’s expanding global scientific leadership. The HPC Principal Engineer will also provide IT capabilities and consulting support to science and technical programs.
This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other IT needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure supplemented with cloud computing to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist with the use of Cloud Compute Services (primarily AWS and GCP), containerization tools, etc. among researchers, scientific clients, and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support Machine Learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's Degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable. An advanced degree is strongly desired.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production storage hardware, software systems, storage networks, and systems
* Configuring and administering parallel, network-attached storage (Lustre, NFS, ESS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.)
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, cgroups
* Scripting languages (including Bash, Python, or Perl)
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software
* Familiarity with source control tools (Git or SVN)

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* Principal Engineer = $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data.
Your recruiter can share more about the specific pay range during the hiring process.

#Benefits
💰 401(k)
🌎 Distributed team
⏰ Async
🤓 Vision insurance
🦷 Dental insurance
🚑 Medical insurance
🏖 Unlimited vacation
🏖 Paid time off
📆 4 day workweek
💰 401k matching
🏔 Company retreats
🏬 Coworking budget
📚 Learning budget
💪 Free gym membership
🧘 Mental wellness budget
🖥 Home office budget
🥧 Pay in crypto
🥸 Pseudonymous
💰 Profit sharing
💰 Equity compensation
⬜️ No whiteboard interview
👀 No monitoring system
🚫 No politics at work
🎅 We hire old (and young)
#Location
San Francisco, California, United States
👉 Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!