Position Overview

We are looking for a Solutions Engineer at SingleStore who is passionate about removing data bottlenecks for customers and bringing real-time data capabilities to some of the most difficult data challenges in the industry. In this role you will work directly with our sales teams and channel partners to identify prospective and current customer pain points where SingleStore can remove those bottlenecks and deliver real-time capabilities. You will provide value-based demonstrations and presentations, and support proofs of concept to validate proposed solutions.

As a Solutions Engineer at SingleStore, you must share our passion for real-time data, fast analytics, and simplified data architecture. You must be comfortable in high-level executive conversations while also being able to deeply understand the technology and its value proposition.

About our Team

At SingleStore, the Solutions Engineering team epitomizes a dynamic blend of innovation, expertise, and a fervent commitment to meeting complex data challenges head-on. This team is composed of highly skilled individuals who are not just adept at working with the latest technologies but are also instrumental in ensuring that SingleStore is the perfect fit for our customers.

Our team thrives on collaboration and determination, building some of the most cutting-edge deployments of SingleStore data architectures for our most strategic customers. This involves working directly with product management to ensure that our product not only addresses current data challenges but is also geared up for future advancements.

Beyond the technical prowess, our team culture is rooted in a shared passion for transforming how businesses leverage data. We are a community of forward-thinkers, where each member's contribution is valued in our collective pursuit of excellence. Our approach combines industry-leading engineering, visionary design, and a dedicated customer success ethos to shape the future of database technology. In our team, every challenge is an opportunity for growth, and we support each other in our continuous learning journey. At SingleStore, we're more than a team; we're innovators shaping the real-time data solutions of tomorrow.

Responsibilities

* Engage with both current and prospective clients to understand their technical and business challenges
* Present and demonstrate the SingleStore product offering to Fortune 500 companies
* Stay enthusiastic about the data analytics and data engineering landscape
* Provide valuable feedback to product teams based on client interactions
* Stay up to date with database technologies and the SingleStore product offerings

Qualifications

* Minimum of 3 years' experience in a technical pre-sales role
* Broad range of experience with large-scale database and/or data warehousing technologies
* Excellent presentation and communication skills, with experience presenting to large corporate organizations
* Experience with Kubernetes and Linux is a plus
* Ability to communicate complex technical concepts to non-technical audiences
* Strong team player with interpersonal skills
* Preferred: experience with data engineering tools such as Apache Spark, Apache Kafka, and ETL tools such as Talend or Informatica
* A "next generation" personality (cloud and AI experience)
* Experience with cutting-edge cloud products and companies
* Demonstrated proficiency in ANSI SQL
* Demonstrated proficiency in Python, Scala, or Java
* Understanding of private and public cloud platforms such as AWS, Azure, GCP, and VMware
* Preferred: experience with prompt engineering
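For candidates preparing a value-based demo, it may help to know that SingleStore is wire-compatible with MySQL, so a proof-of-concept query can be driven from Python with a standard MySQL client. Below is a minimal sketch under that assumption; the host, credentials, and the events table are hypothetical placeholders, not SingleStore specifics:

# Minimal sketch: querying SingleStore over its MySQL-compatible wire protocol.
# The host, credentials, and the `events` table are hypothetical placeholders.
import pymysql

conn = pymysql.connect(
    host="svc-example.singlestore.com",  # hypothetical workspace host
    port=3306,
    user="demo_user",
    password="demo_password",
    database="demo_db",
)
try:
    with conn.cursor() as cur:
        # An analytical aggregation of the kind shown in demos:
        # event counts per minute over the last hour.
        cur.execute(
            "SELECT DATE_FORMAT(event_ts, '%Y-%m-%d %H:%i') AS minute, "
            "       COUNT(*) AS events "
            "FROM events "
            "WHERE event_ts >= NOW() - INTERVAL 1 HOUR "
            "GROUP BY minute ORDER BY minute"
        )
        for minute, events in cur.fetchall():
            print(minute, events)
finally:
    conn.close()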
SingleStore delivers the cloud-native database with the speed and scale to power the world's data-intensive applications. With a distributed SQL database that introduces simplicity to your data architecture by unifying transactions and analytics, SingleStore empowers digital leaders to deliver exceptional, real-time data experiences to their customers. SingleStore is venture-backed and headquartered in San Francisco with offices in Sunnyvale, Raleigh, Seattle, Boston, London, Lisbon, Bangalore, Dublin, and Kyiv.

Consistent with our commitment to diversity & inclusion, we value individuals with the ability to work on diverse teams and with a diverse range of people.

To all recruitment agencies: SingleStore does not accept agency resumes. Please do not forward resumes to SingleStore employees. SingleStore is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company.

#li-remote #remote-li

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Cloud, Sales, and Engineer jobs:
$50,000 — $105,000/year
#Location
Pune, Maharashtra, India
As a Staff Cloud Ops Engineer at Wrike, your primary strength lies in PostgreSQL, where you excel in designing, managing, and optimizing internal database systems. Your advanced skills extend to cloud and data center infrastructure, with an emphasis on security, containers, networking, monitoring, automation, and debugging. You proactively define tasks based on team objectives, guide others, and propose impactful infrastructure improvements.

You would have the opportunity to engage in high-end technical projects, including the implementation of Kafka, the development of an internal database management system built on PostgreSQL, and collaboration on command and control systems within an international team.

In this role, you would join a core development team of 250+ engineers developing Wrike and become part of an operations department that is exposed to a wide variety of technologies and systems. Does this sound like you? If your answer is yes, we'd love to speak with you!

More about your team

We have 14 folks in the SysOps Department, consisting of two teams distributed across Prague, Cyprus, and Tallinn. As a core member of the team, you will be:

* Managing the Wrike product infrastructure with a focus on data infrastructure
* Designing reliable solutions to ensure a product uptime SLA of 99.99%
* Working with GCP and other providers in the IaC paradigm
* Implementing and supporting new infrastructure services
* Actively participating in incident response and management, including on-call duties
* Developing and maintaining professional connections within and outside of the team
* Driving end-to-end completion of significant projects, supported by senior team members

Technical Environment:

We run a Java-based SaaS application in Kubernetes for a massive audience of over 20,000 organizations. Our data resides in more than 100 highly available PostgreSQL clusters across our production and pre-production environments. To ensure seamless data access, we've implemented robust data streaming pipelines. These pipelines empower our back-end and analytics teams to swiftly retrieve accurate data, contributing to the efficiency and success of our operations.

Key technologies and tools include:

* PostgreSQL as the database platform
* Kafka and RabbitMQ for messaging
* Kubernetes and Helm (service-oriented architecture)
* Nginx, HAProxy, and Istio for load balancing
* GCP, AWS, and Cloudflare as our cloud providers
* Puppet, Ansible, and Terraform for defining everything as code
* Python to automate everything
* Prometheus (VictoriaMetrics) and Zabbix for monitoring
* Graylog, Logstash, and Fluentd for logging
* Jenkins and GitLab CI for build pipelines
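As a flavor of the data-infrastructure work above, here is a minimal monitoring sketch that reports physical replication lag for a fleet of PostgreSQL primaries. The host names and credentials are hypothetical placeholders; only the documented pg_stat_replication view is assumed:

# Minimal sketch: report replication lag for a fleet of PostgreSQL primaries.
# Host names and credentials are hypothetical placeholders.
import psycopg2

PRIMARIES = ["pg-cluster-01.internal", "pg-cluster-02.internal"]  # placeholders

for host in PRIMARIES:
    conn = psycopg2.connect(host=host, dbname="postgres",
                            user="monitor", password="secret")
    try:
        with conn.cursor() as cur:
            # pg_stat_replication is populated on the primary; replay_lag
            # (PostgreSQL 10+) shows how far each standby trails behind.
            cur.execute(
                "SELECT application_name, state, replay_lag "
                "FROM pg_stat_replication"
            )
            for name, state, lag in cur.fetchall():
                print(f"{host}: standby={name} state={state} lag={lag}")
    finally:
        conn.close()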
You will achieve your best if you have:

* Advanced experience in designing, managing, and optimizing database management systems based on PostgreSQL
* Advanced knowledge in at least two of the following areas and intermediate knowledge of the rest: data networks, security, databases, cloud providers, process automation, containerized application management
* Advanced Linux administration skills, with experience maintaining highly available infrastructure for a web application stack
* Sufficient scripting skills in Python, Bash, or other scripting languages
* Upper-intermediate English skills

You will stand out with:

* Advanced experience running the Kubernetes platform
* Advanced experience managing any cloud provider using IaC (AWS/GCP/Azure)
* Strong understanding of Linux fundamentals: security principles, hardware, troubleshooting, etc.
* Monitoring experience with Zabbix, Grafana, or Prometheus
* Advanced experience with any system configuration management tool (Ansible, Puppet, Salt, etc.)

Perks of working with Wrike

* 28 calendar days of paid vacation
* Sick leave compensation
* Life insurance plan
* Health insurance plan
* Fitness plan (800 EUR/year)
* Parental leave
* 2 volunteer days
* Full-remote work and on-demand access to a co-working space
* Utility allowance (30 EUR/month, subject to taxation)

Your recruitment buddy will be Marketa Rezacova, IT Recruiter.

#LI-MR1

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar SaaS, Java, Cloud, Senior, and Engineer jobs:
$60,000 — $90,000/year
#Location
Tallinn, Harju, Estonia
About Coalfire
Coalfire is on a mission to make the world a safer place by solving our clients' toughest cybersecurity challenges. We work at the cutting edge of technology to advise, assess, automate, and ultimately help companies navigate the ever-changing cybersecurity landscape. We are headquartered in Denver, Colorado with offices across the U.S. and U.K., and we support clients around the world.

But that's not who we are; that's just what we do.

We are thought leaders, consultants, and cybersecurity experts, but above all else, we are a team of passionate problem-solvers who are hungry to learn, grow, and make a difference.

And we're growing fast.

We're looking for a Site Reliability Engineer I to support our Managed Services team.

Position Summary
As a Junior Site Reliability Engineer at Coalfire within our Managed Services (CMS) group, you will be a self-starter, passionate about cloud technology, who thrives on problem solving. You will work within major public clouds, using automation and your technical abilities to operate the most cutting-edge offerings from Cloud Service Providers (CSPs). This role directly supports leading cloud software companies in providing seamless reliability and scalability of their SaaS products to the largest enterprises and government agencies around the world.

This can be a remote position (you must be located in the United States).

What You'll Do
* Become a member of a highly collaborative engineering team offering a unique blend of cloud infrastructure administration, site reliability engineering, security operations, and vulnerability management across multiple clients.
* Coordinate with client product teams, engineering team members, and other stakeholders to monitor and maintain a secure and resilient cloud-hosted infrastructure to established SLAs in both production and non-production environments.
* Innovate and implement using automated orchestration and configuration management techniques. Understand the design, deployment, and management of secure and compliant enterprise servers, network infrastructure, boundary protection, and cloud architectures using Infrastructure as Code.
* Create, maintain, and peer review automated orchestration and configuration management codebases, as well as Infrastructure-as-Code codebases. Maintain IaC tooling and versioning within client environments.
* Implement and upgrade client environments with CI/CD infrastructure code and provide internal feedback to development teams on environment requirements and necessary alterations.
* Work across AWS, Azure, and GCP, understanding and utilizing their unique native services in client environments.
* Configure, tune, and troubleshoot cloud-based tools, and manage cost, security, and compliance for the client's environments.
* Monitor and resolve site stability and performance issues related to functionality and availability.
* Work closely with client DevOps and product teams to provide 24x7x365 support to environments through client ticketing systems.
* Support definition, testing, and validation of incident response and disaster recovery documentation and exercises.
* Participate in on-call rotations as needed to support client critical events and operational needs that may fall outside of business hours.
* Support testing and data reviews to collect and report on the effectiveness of current security and operational measures, in addition to remediating deviations from those measures.
* Maintain detailed diagrams representative of the client's cloud architecture.
* Maintain, optimize, and peer review standard operating procedures, operational runbooks, technical documents, and troubleshooting guidelines.
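To make the compliance-automation work above concrete, here is a toy sketch of one such check: flagging S3 buckets that report no default server-side encryption. It assumes AWS credentials are already configured in the environment; everything else uses documented boto3 calls:

# Toy compliance check: flag S3 buckets with no default server-side encryption.
# Assumes AWS credentials are already configured in the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"OK          {name}")
    except ClientError as err:
        # S3 raises this specific error code when no default encryption is set.
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"UNENCRYPTED {name}")
        else:
            raise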
What You'll Bring
* BS or above in a related information technology field, or an equivalent combination of education and experience
* 2+ years' experience in 24x7x365 production operations
* Fundamental understanding of networking and network troubleshooting
* 2+ years' experience installing, managing, and troubleshooting Linux and/or Windows Server operating systems in a production environment
* 2+ years' experience supporting cloud operations and automation in AWS, Azure, or GCP (and aligned certifications)
* 2+ years' experience with Infrastructure as Code and orchestration/automation tools such as Terraform and Ansible
* Experience with IaaS platform capabilities and services (cloud certifications expected)
* Experience with ticketing tool solutions such as Jira and ServiceNow
* Experience using environment analytics tools such as Splunk and Elastic Stack for querying, monitoring, and alerting
* Experience in at least one primary scripting language (Bash, Python, PowerShell)
* Excellent communication, organizational, and problem-solving skills in a dynamic environment
* Effective documentation skills, including technical diagrams and written descriptions
* Ability to work as part of a team with a professional attitude and demeanor

Bonus Points
* Previous experience in a consulting role within dynamic, fast-paced environments
* Previous experience supporting a 24x7x365 highly available environment for a SaaS vendor
* Experience supporting security and/or infrastructure incident handling and investigation, and/or system scenario re-creation
* Experience working with container orchestration solutions such as Kubernetes, Docker, EKS, and/or ECS
* Experience working within an automated CI/CD pipeline for release development, testing, remediation, and deployment
* Cloud-based networking experience (Palo Alto, Cisco ASAv, etc.)
* Familiarity with frameworks such as FedRAMP, FISMA, SOC, ISO, HIPAA, HITRUST, and PCI
* Familiarity with configuration baseline standards such as CIS Benchmarks and DISA STIG
* Knowledge of encryption technologies (SSL, PKI)
* Experience with diagramming tools (Visio, Lucid Chart, etc.)
* Application development experience for cloud-based systems
Why You'll Want to Join Us

At Coalfire, you'll find the support you need to thrive personally and professionally. In many cases, we provide a flexible work model that empowers you to choose when and where you'll work most effectively, whether you're at home or an office.

Regardless of location, you'll experience a company that prioritizes connection and wellbeing, and you'll be part of a team where people care about each other and our communities. You'll have opportunities to join employee resource groups, participate in in-person and virtual events, and more. And you'll enjoy competitive perks and benefits to support you and your family, like paid parental leave, flexible time off, certification and training reimbursement, a digital mental health and wellbeing support membership, and comprehensive insurance options.

At Coalfire, equal opportunity and pay equity are integral to the way we do business. A reasonable estimate of the compensation range for this role is $95,000 to $110,000, based on national salary averages. The actual salary offered to the successful candidate will be based on job-related education, geographic location, training, licensure, certifications, and other factors. You may also be eligible to participate in annual incentive, commission, and/or recognition programs. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

#LI-REMOTE
#LI-JB1

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar SaaS, DevOps, Cloud, Junior, and Engineer jobs:
$60,000 — $100,000/year
#Location
United States
We are seeking Data Engineers with a passion for sports to develop cloud-based data pipelines and automated data processing for our world-class sports intelligence platforms in baseball, basketball, cricket, eSports, football (American), golf, hockey, soccer, and tennis. Through your work, you can support the professional teams in our exclusive partner network in their efforts to compete and win championships.

Zelus Analytics is a fully remote company working directly with teams across the NBA, MLB, NFL, IPL, and NHL, in addition to a number of soccer teams around the globe. Zelus unites a fast-growing startup environment with a research-focused culture that embraces our core values of integrity, innovation, and inclusion. We pride ourselves on providing meaningful mentorship that offers our team the opportunity to develop and expand their skill sets while also engaging with the broader analytics community. In so doing, we hope to create a new path for a broader group of highly talented people to push the cutting edge of sports analytics.

We believe that a diverse team is vital to building the world's best sports intelligence platform. Thus, we strongly encourage you to apply if you identify with any marginalized community across race, ethnicity, gender, sexual orientation, veteran status, or disability. At Zelus, we are committed to creating an inclusive environment where all of our employees are enabled and empowered to succeed and thrive.

As Zelus employees advance in experience and level, they are expected to build on their competencies and expertise and demonstrate increasing impact, independence, and leadership within their roles.

More specifically, as a Zelus Data Engineer, you will be expected to:

* Design, develop, document, and maintain the schemas and ETL pipelines for our internal sports databases and data warehouses
* Implement and test collection, mapping, and storage procedures for secure access to team, league, and third-party data sources
* Develop algorithms for quality assurance and imputation to prepare data for exploratory analysis and quantitative modeling (see the sketch after this list)
* Profile and optimize automated data processing tasks
* Coordinate with data providers around planned changes to raw data feeds
* Deploy and maintain system and database monitoring tools
* Collaborate and communicate effectively in a distributed work environment
* Fulfill other related duties and responsibilities, including rotating platform support
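As a sketch of the quality-assurance work referenced above, the snippet below runs basic sanity checks over an ingested pitch-level feed. The schema (file path, column names, and bounds) is entirely hypothetical:

# Minimal data-quality sketch over a hypothetical pitch-level feed.
import pandas as pd

def validate_pitches(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that fail basic sanity checks."""
    ok = (
        df["pitch_speed_mph"].between(40, 110)              # plausible speeds
        & df["game_id"].notna()                              # pitch belongs to a game
        & ~df.duplicated(subset=["game_id", "pitch_number"])  # no duplicate events
    )
    return df[~ok]

df = pd.read_parquet("raw/pitches.parquet")  # hypothetical raw extract
bad = validate_pitches(df)
print(f"{len(bad)} of {len(df)} rows failed validation")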
Additionally, a Data Engineer II will be expected to:

* Create data ingestion and integration workflows that scale and can be easily adapted to future use cases
* Assess, provision, monitor, and maintain the appropriate infrastructure and tooling to execute data engineering workflows

Additionally, a Senior Data Engineer will be expected to:

* Research, design, and test generalizable software architectures for data ingestion, processing, and integration, and guide organizational adoption
* Collaborate with data science to design and implement vendor-agnostic data models that support downstream modeling efforts
* Lead team-wide implementation of data engineering standards
* Effectively communicate complex technical concepts to both internal and external audiences
* Provide guidance and technical mentorship for junior engineers
* Assist with recruiting and outreach for the engineering team, including building a diverse network of future candidates

Additionally, a Senior Data Engineer II will be expected to:

* Identify and implement generalizable strategies for infrastructure maintenance and data-related cost savings
* Break down complex data engineering projects into actionable work plans, including proposed task assignments with clear design specifications
* Assist in defining data engineering standards for the organization

A qualified Data Engineer candidate will be able to demonstrate several of the following and will be excited to learn the rest through the mentorship provided at Zelus:

* Academic and/or industry experience in back-end software design and development
* Experience with ETL architecture and development in a cloud-based environment
* Fluency in SQL development and an understanding of database and data warehousing technologies
* Proficiency with Python (preferred), Scala, and/or other data-oriented programming languages
* Experience with automated data quality validation across large data sets
* Familiarity working with Linux servers in a virtualized/distributed environment
* Strong software engineering and problem-solving skills

A qualified Senior Data Engineer candidate will be able to demonstrate all of the above at a higher level of competency, plus the following:

* Expertise developing complex databases and data warehouses for large-scale, cloud-based analytics systems
* Experience with task orchestration and workflow automation tools
* Experience building and overseeing team-wide data quality initiatives
* Experience adapting, retraining, and retooling in a rapidly changing technology environment
* Desire and ability to successfully mentor junior engineers

Starting salary ranges*:

* $87,000 to $102,000 for Data Engineer
* $102,000 to $118,000 for Data Engineer II
* $118,000 to $136,000 for Senior Data Engineer
* $136,000 to $160,000 for Senior Data Engineer II

*Compensation paid in non-US currency will be in a comparable range, adjusted for differences in total cost of employment.

Zelus has a fully distributed workforce, spanning multiple states and countries, with a formal process for establishing compensation equity across its global staff. In addition to competitive salaries, our full-time compensation packages include equity grants and comprehensive benefits, such as an annual incentive bonus plan; supplemental health, vision, and dental insurance; and flexible PTO, all of which allow us to attract and retain a world-class team.

As an equal opportunity employer, Zelus does not discriminate on the basis of race, ethnicity, color, religion, creed, gender, gender expression or identification, sexual orientation, marital status, age, national origin, disability, genetic information, military status, or any other characteristic protected by law. It is our policy to provide reasonable accommodations for applicants and employees with disabilities. Please let us know if a reasonable accommodation is needed to participate in the job application or interview process.

In most jurisdictions, Zelus is an at-will employer; employment at Zelus is for an indefinite period of time and is subject to termination by the employer or the employee at any time, with or without cause or notice.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Python, Senior, Engineer, and Linux jobs:
$60,000 — $110,000/year
What You'll Do:

We're looking for a talented and intensely curious Senior AWS Cloud Engineer who is nimble and focused, with a startup mentality. In this newly created role you will be the liaison between data engineers, data scientists, and analytics engineers. You will work to create cutting-edge architecture that provides increased performance, scalability, and concurrency for Data Science and Analytics workflows.

Responsibilities

* Provide AWS infrastructure support and systems administration for new and existing products, implemented through IAM, EC2, S3, AWS networking (VPC, IGW, NGW, ALB, NLB, etc.), Terraform, CloudFormation templates, and security services: Security Groups, GuardDuty, CloudTrail, Config, and WAF.
* Monitor and maintain production, development, and QA cloud infrastructure resources for compliance with all six pillars of the AWS Well-Architected Framework, including the Security Pillar.
* Develop and maintain Continuous Integration (CI) and Continuous Deployment (CD) pipelines needed to automate testing and deployment of all production software components as part of a fast-paced, agile engineering team. Technologies required: ElastiCache, Bitbucket Pipelines, GitHub, Docker Compose, Kubernetes, Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS), and Linux-based server instances.
* Develop and maintain Infrastructure as Code (IaC) services for creation of ephemeral cloud-native infrastructure hosted on Amazon Web Services (AWS) and Google Cloud Platform (GCP). Technologies required: AWS CloudFormation, Google Cloud Deployment Manager, AWS SSM, YAML, JSON, Python.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 99.99% uptime. Technologies required: AWS IAM, AWS CloudWatch, AWS EventBridge, AWS SSM, AWS SQS, AWS SNS, AWS Lambda and Step Functions, Python, Java, RDS Postgres, RDS MySQL, AWS S3, Docker, AWS Elasticsearch, Kibana, AWS Amplify.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 100% cybersecurity compliance and surveillance. Technologies required: AWS SSM, YAML, JSON, Python, RDS Postgres, Tenable, CrowdStrike EPP, Sophos EPP, Wiz CSPM, Linux Bash scripts.
* Design and code technical solutions that improve the scalability, performance, and reliability of all data acquisition pipelines. Technologies required: Google Ads APIs, YouTube Data API, Python, Java, AWS Glue, AWS S3, AWS SNS, AWS SQS, AWS KMS, AWS RDS Postgres, AWS RDS MySQL, AWS Redshift.
* Monitor and remediate server and application security events as reported by CrowdStrike EPP, Tenable, Wiz CSPM, and Invicti.
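As an illustration of the uptime telemetry mentioned above, here is a minimal sketch that publishes a custom health metric to CloudWatch, the kind of signal a 99.99% uptime target is tracked against. The namespace, dimension, and metric names are hypothetical; configured AWS credentials are assumed:

# Minimal sketch: publish a custom pipeline-health metric to CloudWatch.
# Namespace, dimensions, and metric name are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="DataAcquisition/Pipelines",      # hypothetical namespace
    MetricData=[{
        "MetricName": "IngestBatchSuccess",
        "Dimensions": [{"Name": "Pipeline", "Value": "youtube-data"}],
        "Value": 1.0,                           # 1 = success, 0 = failure
        "Unit": "Count",
    }],
)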
Who you are:

* Minimum of 5 years of systems administration or DevOps engineering experience on AWS
* Track record of success in systems administration, including system design, configuration, maintenance, and upgrades
* Excels at architecting, designing, developing, and implementing cloud-native AWS platforms and services
* Knowledgeable in managing cloud infrastructure in a production environment to ensure high availability and reliability
* Proficient in automating system deployment, operation, and maintenance using Infrastructure as Code: Ansible, Terraform, CloudFormation, and other common DevOps tools and scripting
* Experienced with Agile processes in a structured setting (required); Scrum and/or Kanban
* Experience with security and compliance standards such as PCI and SOC, as well as data privacy and protection standards, is a big plus
* Experienced in implementing dashboards and data for decision-making related to team and system performance, relying heavily on telemetry and monitoring
* Exceptional analytical capabilities
* Strong communication skills and the ability to effectively interact with engineering and business stakeholders

Preferred Qualifications:

* Bachelor's degree in technology, engineering, or a related field
* AWS certifications: Solutions Architect, DevOps Engineer, etc.

Why Spotter:

* Medical and vision insurance covered up to 100%
* Dental insurance
* 401(k) matching
* Stock options
* Complimentary gym access
* Autonomy and upward mobility
* Diverse, equitable, and inclusive culture, where your voice matters

In compliance with local law, we are disclosing the compensation, or a range thereof, for roles that will be performed in Culver City. Actual salaries will vary and may be above or below the range based on various factors, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. A reasonable estimate of the current pay range is $100,000 to $500,000 per year. The range listed is just one component of Spotter's total compensation package for employees. Other rewards may include an annual discretionary bonus and equity.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Docker, Testing, DevOps, Cloud, Senior, Engineer, and Linux jobs:
$50,000 — $80,000/year
#Location
Los Angeles, California, United States
Chan Zuckerberg Biohub - San Francisco is hiring a Remote AI/ML HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub Network has an immediate opening for an AI/ML High Performance Computing (HPC) Principal Engineer. The CZ Biohub Network is composed of several new institutes that the Chan Zuckerberg Initiative created to do great science that cannot be done in conventional environments. The CZ Biohub Network brings together researchers from across disciplines to pursue audacious, important scientific challenges. The Network consists of four institutes throughout the country: San Francisco, Silicon Valley, Chicago, and New York City. Each institute closely collaborates with the major universities in its local area. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports several hundred of the brightest, boldest engineers, data scientists, and biomedical researchers in the country, with the mission of understanding the mysteries of the cell and how cells interact within systems.

The Biohub is expanding its global scientific leadership, particularly in the area of AI/ML, with the acquisition of the largest GPU cluster dedicated to AI for biology. The AI/ML HPC Principal Engineer will be tasked with helping to realize the full potential of this capability, in addition to providing advanced computing capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other computing needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure, supplemented with cloud computing, to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist researchers with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc., for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support Machine Learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas
Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production compute, interconnect, and storage hardware, software systems, and storage subsystems
* Configuring and administering parallel, network-attached storage (Lustre, GPFS on ESS, NFS, Ceph) and storage subsystems (e.g., IBM, NetApp, DataDirect Networks, LSI, VAST, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.) and implementing fair-share, node sharing, backfill, etc. for compute and GPUs
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* OpenACC, the NVIDIA HPC SDK (nvhpc), and an understanding of CUDA driver compatibility issues
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High-performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software (Modules, Spack)
* Familiarity with source control tools (Git or SVN)
* Experience supporting the use of popular ML frameworks such as PyTorch and TensorFlow
* Familiarity with cybersecurity tools, methodologies, and best practices for protecting systems used for science
* Experience with movement, storage, backup, and archiving of large-scale data
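To ground the job-scheduler items above, here is a minimal sketch that renders and submits a SLURM batch script for a single-GPU job from Python. The partition and module names are hypothetical and site-specific; only standard SLURM directives and the sbatch command are assumed:

# Minimal sketch: render and submit a SLURM batch script for a one-GPU job.
# Partition and module names are hypothetical and site-specific.
import subprocess
import tempfile
import textwrap

script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=train-demo
    #SBATCH --partition=gpu          # hypothetical partition name
    #SBATCH --gres=gpu:1             # request one GPU via generic resources
    #SBATCH --cpus-per-task=8
    #SBATCH --time=04:00:00
    module load cuda                 # hypothetical environment module
    python train.py --epochs 10
""")

with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
    f.write(script)
    path = f.name

# On success, sbatch prints e.g. "Submitted batch job 12345".
result = subprocess.run(["sbatch", path], capture_output=True, text=True)
print(result.stdout.strip())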
Nice to have:

* An advanced degree is strongly desired

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors, including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Consulting, Education, Cloud, Node, Engineer, and Linux jobs:
$57,500 — $85,000/year
#Location
San Francisco, California, United States
Chan Zuckerberg Biohub - San Francisco is hiring a Remote HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub has an immediate opening for a High Performance Computing (HPC) Principal Engineer. The CZ Biohub is a one-of-a-kind independent non-profit research institute that brings together three leading universities - Stanford, UC Berkeley, and UC San Francisco - into a single collaborative technology and discovery engine. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports over 100 of the brightest, boldest engineers, data scientists, and biomedical researchers in the Bay Area, with the mission of understanding the underlying mechanisms of disease through the development of tools and technologies and their application to therapeutics and diagnostics.

This position will be tasked with strengthening and expanding the scientific computational capacity to further the Biohub's expanding global scientific leadership. The HPC Principal Engineer will also provide IT capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other IT needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure, supplemented with cloud computing, to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist researchers with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc., for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support Machine Learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable. An advanced degree is strongly desired.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas
Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production storage hardware, software systems, storage networks, and systems
* Configuring and administering parallel, network-attached storage (Lustre, NFS, ESS, Ceph) and storage subsystems (e.g., IBM, NetApp, DataDirect Networks, LSI, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.)
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High-performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software
* Familiarity with source control tools (Git or SVN)
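As a small illustration of the production troubleshooting described above, the sketch below summarizes node states across SLURM partitions. It relies only on sinfo's documented format specifiers (%P for partition, %t for compact node state) and makes no site-specific assumptions:

# Minimal sketch: count SLURM node states per partition as a health summary.
import subprocess
from collections import Counter

# --Node gives one line per node; --noheader strips the header row;
# the format prints e.g. "gpu* idle" per line.
out = subprocess.run(
    ["sinfo", "--Node", "--noheader", "--format=%P %t"],
    capture_output=True, text=True, check=True,
).stdout

states = Counter(tuple(line.split()) for line in out.strip().splitlines())
for (partition, state), count in sorted(states.items()):
    print(f"{partition:<12} {state:<8} {count}")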
The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* Principal Engineer: $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors, including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar Consulting, Education, Cloud, Engineer, and Linux jobs:
$50,000 — $85,000/year
#Location
San Francisco, California, United States