\nWe're making the world of digital assets accessible and secure for everyone. Join the mission. \n\n\nFounded in 2014, Ledger is the global platform for digital assets and Web3. Over 20% of the world's crypto assets are secured through our Ledger Nanos. Headquartered in Paris and Vierzon, with offices in the UK, US, Switzerland and Singapore, Ledger has a team of more than 700 professionals developing a variety of products and services to enable individuals and companies to securely buy, store, swap, grow and manage crypto assets, including the Ledger hardware wallet line with more than 7 million units already sold in 200 countries. \n\n\nAt Ledger, we embody the values that make us unique: Pragmatism, Audacity, Commitment, Trust and Transparency. Hear from our employees how they shape the work we do here. \n\n\n\nPrimary responsibilities:\n* Reporting to our SRE Manager, you will be a member of Ledger's SRE team, driving technology transformation by launching new platforms, building tools, automating away complex issues, and integrating with the latest technology. \n* Site Reliability Engineers leverage their experience as software and systems engineers to ensure that applications integrated by SRE are available, have full-stack observability, and improve continuously through code and automation.\n* We are looking for an experienced reliability engineer who thrives on solving complex problems through innovation and on driving change at scale. \n* You will bring a strong mixture of software engineering, operations, and systems engineering experience to the role, along with experience integrating complex systems.\n\n\n\nIn this role you will:\n* Participate in building a DevOps / SRE culture and enable the transition to modern infrastructure management and deployment practices;\n* Participate in building the SRE team roadmap (vision and delivery accountability).
Anticipate stakeholder needs and the emergence of game-changing technologies, and challenge scope / deadlines;\n* Perform integration of platform software components;\n* Participate in designing and delivering solutions to improve the availability, scalability, latency, and efficiency of systems;\n* Influence and create standards & best practices in support of service level objectives;\n* Automate key SRE metrics including SLOs/SLAs and error budgets;\n* Provide expert support to our level-2/application support team to troubleshoot priority incidents and conduct post-mortems;\n* Apply analytics to past incidents and usage patterns to predict issues and take proactive action;\n* Ensure control of technical debt and promote quality practices;\n* Apply SRE and chaos engineering approaches across all strategic systems, in coordination with Service Design, to predict and prevent outages and improve solution availability;\n* Drive adoption of self-healing and resiliency patterns such as circuit breaker, bulkhead, etc.;\n* Design and conduct performance tests, identifying bottlenecks and opportunities for optimization.\n\n\n\nWhat we're looking for: \n* 5+ years of cloud engineering at scale in organizations operating SaaS solutions\n* Proficiency working in Unix/Linux environments and with Git, Python, Terraform, Kubernetes, AWS cloud solutions and architectures, CI/CD tools, ArgoCD, Ansible, configuration management, etc.\n* Strong knowledge of observability practices, with experience implementing and managing logging, monitoring and alerting frameworks with solutions such as Datadog or Prometheus/Grafana/Loki.\n* Experience with cross-functional work and a demonstrably collaborative approach to building key relationships across the organization and defining project scope, goals, plans and deliverables\n* Customer focused, with the ability to identify and understand both internal and external customers' needs\n* Creative problem-solving and analysis skills, with an ability to identify, develop and implement solutions to meet the needs of the business\n* Excellent presentation and written communication skills; ability to deal with ambiguity, a high level of pressure and rapidly changing environments\n* Engineering degree\n\n\n\nWhat's in it for you?\n* Equity: Employees are the foundation of our success, and we award stock options so you can share in that success as we grow. \n* Flexibility: A hybrid work policy.\n* Social: Annual company outing for Ledgerdary Days, plus frequent social events, snacks and drinks.\n* Medical: Comprehensive health insurance policy offering extensive medical, dental and vision care coverage. \n* Well-being: Personal development, coaching & fitness with our dedicated partners.\n* Vacation: Five weeks of paid leave per year, in addition to national holidays and rest & relaxation (RTT) days.\n* High tech: Access to high-performance office equipment and gadgets, including Apple products. \n* Transport: Ledger reimburses part of your preferred means of transportation. \n* Discounts: Employee discount on all our products.\n\n\n\n\n\n\nWe are an equal opportunity employer for all, without any distinction of gender, ethnicity, religion, sexual orientation, social status, disability or age. \n\n\n#LI-HG #LI-Hybrid\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Design, SaaS, Crypto, DevOps, Cloud, Senior and Engineer jobs:\n\n
$60,000 – $80,000/year\n
\n\n#Location\nParis, France
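The Ledger SRE posting above asks for driving adoption of resiliency patterns such as circuit breakers and bulkheads. As a rough illustration only (not Ledger's implementation; the function names and thresholds are invented), a minimal circuit-breaker decorator in Python could look like this:

```python
import time
import functools

class CircuitOpenError(RuntimeError):
    """Raised when calls are short-circuited because the breaker is open."""

def circuit_breaker(max_failures=5, reset_after=30.0):
    """Open the circuit after `max_failures` consecutive errors;
    allow a single probe call once `reset_after` seconds have passed."""
    def decorator(func):
        state = {"failures": 0, "opened_at": None}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if state["opened_at"] is not None:
                if time.monotonic() - state["opened_at"] < reset_after:
                    raise CircuitOpenError(f"{func.__name__} is short-circuited")
                # Half-open: let one call through to probe the dependency.
                state["opened_at"] = None
            try:
                result = func(*args, **kwargs)
            except Exception:
                state["failures"] += 1
                if state["failures"] >= max_failures:
                    state["opened_at"] = time.monotonic()
                raise
            state["failures"] = 0  # success closes the circuit again
            return result
        return wrapper
    return decorator

@circuit_breaker(max_failures=3, reset_after=10.0)
def call_downstream_service():
    # Hypothetical dependency call; replace with a real client call.
    raise TimeoutError("downstream unavailable")
```

In practice this behavior usually comes from a mature library or a service mesh; the sketch only shows the closed / open / half-open state machine in its simplest form.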
What to Expect \n\nWild Alaskan Company is a data-driven, tech-enabled marketing and cold chain logistics company that sells seafood. Our mission is to accelerate humanity's transition to sustainable food systems by fostering meaningful, interconnected relationships between human beings, wild seafood, and the planet. To meet this goal, WAC is constantly striving to innovate technology that facilitates (a) a more connected and vertically integrated supply chain via a proprietary end-to-end logistics platform, (b) a seamless buying experience via best-in-class ecommerce and POS solutions, and (c) human-to-human connectivity via proprietary CRM, content platform and member portal. To that end, WAC is seeking a hardworking and passionate Senior Software Engineer to join the team.\nWhat You'll Do\n\nAs a Senior Software Engineer, you will be joining a growing team of talented, driven engineers who are passionate about their work and the mission. You will have an opportunity to make an important difference in the future of sustainable food systems by building technology that enables the efficient production, access, and distribution of food to individuals across the globe. You'll be bringing your expertise to the technology stack of Wild Alaskan's proprietary order and inventory management systems, as well as our ecommerce and content platforms.\n\nYou will work as an individual contributor in collaboration with the VP of Software Architecture, Digital Product leadership, Product Managers, Principal Engineer, other Senior Engineers, and the Data Science and Analytics Team to fully support and expand our home-grown technology stack in Laravel and Vue.js.\nYour Day-to-Day\n\n\n* Develop robust, scalable, and efficient web applications using Laravel and Vue.js, ensuring high performance and optimal user experience.\n\n* Collaborate with product managers, designers, and other stakeholders to gather requirements and translate them into technical specifications.\n\n* Design and implement database structures and queries to support application functionality and performance.\n\n* Write clean, maintainable, and well-documented code following coding standards and best practices.\n\n* Conduct code reviews and provide constructive feedback to your peers to ensure code quality and adherence to standards.\n\n* Optimize application performance through performance profiling, code optimization, and caching techniques.\n\n* Troubleshoot and debug complex issues, identify root causes, and implement effective solutions.\n\n* Stay up-to-date with industry trends and emerging technologies and apply them to improve our development processes and methodologies.\n\n* Share your knowledge and expertise to foster team growth.\n\n* Collaborate with the QA team to develop comprehensive test plans and ensure high-quality software delivery.\n\n* Participate in Agile development methodologies, including sprint planning, task estimation, and progress tracking.\n\n* Continuously monitor and improve application security, identifying and mitigating potential vulnerabilities.\n\n\n\nWhat You Bring \n\n\n* Mastery of Laravel and Vue.js with 8+ years of experience.\n\n* Strong OOP and code planning proficiency.\n\n* Strong TDD and testing methodologies (PHPUnit).\n\n* Mastery of building RESTful APIs and single-page applications.\n\n* Proficiency in front-end web technologies such as HTML5, CSS3, JavaScript, and related frameworks (e.g., Bootstrap, Tailwind CSS).\n\n* Solid understanding of relational databases (e.g., MySQL, PostgreSQL)
and ability to write efficient SQL queries.\n\n* Mastery of version control systems (e.g., Git) and familiarity with collaborative development workflows (we use feature branching and rebase).\n\n* Familiarity with deployment and hosting environments, including cloud platforms (e.g., AWS) and containerization (e.g., Docker).\n\n* Strong understanding of best-in-class database design practices.\n\n* Strong understanding of frontend performance to optimize user experience and response times.\n\n* Ability to identify technical debt and develop effective strategies to mitigate it.\n\n* Ability to identify gaps in the technology used and propose suitable solutions for enhancing system functionality.\n\n* Proficiency in automated testing to ensure the reliability and quality of the software system.\n\n* Ability to plan and execute incremental improvements to continuously enhance the software system's performance and functionality. \n\n* Excellent communication skills and ability to collaborate effectively with cross-functional teams.\n\n* Self-sufficient and capable of working independently to complete tasks and troubleshoot issues.\n\n* Self-motivated with a passion for learning and staying updated with the latest technologies and industry trends.\n\n\n\nNice to Haves\n\n\n* Knowledge of server-side rendering (SSR) and modern JavaScript framework tools (e.g., Nuxt.js)\n\n* Knowledge of TypeScript\n\n* Familiarity with DevOps practices and CI/CD pipelines\n\n* Experience with UI/UX\n\n* E-commerce experience\n\n* Experience using BI tools such as Looker and Google Analytics\n\n* Food industry experience\n\n* Experience working in start-up environments\n\n\n\nLocation \n\n100% remote with occasional travel for in-person team and companywide retreats. \n\nThe starting salary range for this position is $130,000 - $170,000, commensurate with skills and experience. Wild Alaskan's benefits package includes health, vision, and dental insurance, 401k, PTO, safe/sick time, vacation, parental leave and more, as well as a delicious box of free fish every month.
No salary data was published by the company, so we estimated the salary based on similar Design, DevOps, JavaScript, Laravel, Cloud, Senior, Marketing, Engineer and Ecommerce jobs:\n\n
$50,000 – $100,000/year\n
\n\n#Location\nDenver, Colorado, United States
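The Wild Alaskan posting above lists caching among its performance-optimization expectations. The stack named there is Laravel/Vue.js, so purely as a language-agnostic illustration (shown here in Python, with invented names and an arbitrary TTL), a minimal time-based cache might look like this:

```python
import time
import functools

def ttl_cache(ttl_seconds=60.0):
    """Cache a function's results, discarding entries older than ttl_seconds."""
    def decorator(func):
        cache = {}  # maps args -> (expiry_timestamp, value)

        @functools.wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]          # fresh cache hit
            value = func(*args)        # miss or stale entry: recompute
            cache[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=300)
def product_catalog(region: str) -> list[str]:
    # Hypothetical expensive query; a real app would hit a database or API here.
    return [f"sockeye-{region}", f"coho-{region}"]
```

The same idea carries over to Laravel's built-in cache stores; this sketch only illustrates the expire-and-recompute pattern.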
\nAbout You\n\n\nWe are seeking a skilled Analytics Engineer to join our dynamic Data Team. The ideal candidate will have a comprehensive understanding of the data lifecycle from ingestion to consumption, with a particular focus on data modeling. This role will support various business domains, predominantly Finance, by organizing and structuring data to support robust analytics and reporting.\n\n\nThis role will be part of a highly collaborative team made up of US and Brazil-based Teachable and Hotmart employees.\n\n\nWhat You'll Do\n\n\n* Data Ingestion to Consumption: Manage the flow of data from ingestion to final consumption. Organize data, understand modern data structures and file types, and ensure proper storage in data lakes and data warehouses.\n\n* Data Modeling: Develop and maintain entity-relationship models. Relate business and calculation rules to data models to ensure data integrity and relevance.\n\n* Pipeline Implementation: Design and implement data pipelines, preferably using SQL or Python, to ensure efficient data processing and transformation.\n\n* Reporting Support: Collaborate with business analysts and other stakeholders to understand reporting needs and ensure that data structures support these requirements.\n\n* Documentation: Maintain thorough documentation of data models, data flows, and data transformation processes.\n\n* Collaboration: Work closely with other members of the Data Team and cross-functional teams to support various data-related projects.\n\n* Quality Assurance: Implement and monitor data quality checks to ensure accuracy and reliability of data.\n\n* Cloud Technologies: While the focus is on data modeling, familiarity with cloud technologies and platforms (e.g., AWS) is a plus.\n\n\n\n\n\nWhat You'll Bring\n\n\n* 3+ years of experience working within data engineering, analytics engineering and/or similar functions.\n\n* Experience collaborating with business stakeholders to build and support data projects.\n\n* Experience with database languages, indexing, and partitioning to handle large volumes of data and create optimized queries and databases.\n\n* Experience with file manipulation and organization, including formats such as Parquet.\n\n* Experience with the "ETL/ELT as code" approach for building Data Marts and Data Warehouses.\n\n* Experience with cloud infrastructure and knowledge of solutions like Athena, Redshift Spectrum, and SageMaker.\n\n* Experience with Apache Airflow for creating DAGs for various purposes.\n\n* Critical thinking for evaluating contexts and making decisions about delivery formats that meet the company's needs (e.g., materialized views, etc.).\n\n* Knowledge of development languages, preferably Python or Spark.\n\n* Knowledge of SQL.\n\n* Knowledge of S3, Redshift, and PostgreSQL.\n\n* Experience in developing highly complex historical transformations. Utilization of events is a plus.\n\n* Experience with ETL orchestration and updates.\n\n* Experience with error and inconsistency alerts, including detailed root cause analysis, correction, and improvement proposals.\n\n* Experience with documentation and process creation.\n\n* Knowledge of data pipeline and LakeHouse technologies is a plus.\n\n\n\n\n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Design, Python, Cloud and Engineer jobs:\n\n
$62,500 – $117,500/year\n
\n\n#Location\nSão Paulo, São Paulo, Brazil
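The Analytics Engineer posting above mentions building pipelines in SQL or Python and creating DAGs with Apache Airflow. A minimal sketch of what such a DAG could look like under Airflow 2.x, with hypothetical task names and a placeholder schedule:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    # Placeholder: pull raw order data for the execution date from the source system.
    print("extracting orders for", context["ds"])

def build_finance_mart(**context):
    # Placeholder: transform raw data into a finance-facing model.
    print("building finance mart for", context["ds"])

with DAG(
    dag_id="finance_daily_mart",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="build_finance_mart", python_callable=build_finance_mart)
    extract >> transform                 # run the transform only after extraction succeeds
```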
\nAbout Coalfire\nCoalfire is on a mission to make the world a safer place by solving our clients' toughest cybersecurity challenges. We work at the cutting edge of technology to advise, assess, automate, and ultimately help companies navigate the ever-changing cybersecurity landscape. We are headquartered in Denver, Colorado with offices across the U.S. and U.K., and we support clients around the world. \nBut that's not who we are; that's just what we do. \n \nWe are thought leaders, consultants, and cybersecurity experts, but above all else, we are a team of passionate problem-solvers who are hungry to learn, grow, and make a difference. \n \nAnd we're growing fast. \n \nWe're looking for a Site Reliability Engineer I to support our Managed Services team. \n\n\nPosition Summary\nAs a Junior Site Reliability Engineer at Coalfire within our Managed Services (CMS) group, you will be a self-starter, passionate about cloud technology, and thrive on problem solving. You will work within major public clouds, utilizing automation and your technical abilities to operate the most cutting-edge offerings from Cloud Service Providers (CSPs). This role directly supports leading cloud software companies to provide seamless reliability and scalability of their SaaS product to the largest enterprises and government agencies around the world.\n \nThis can be a remote position (must be located in the United States).\n\n\n\nWhat You'll Do\n* Become a member of a highly collaborative engineering team offering a unique blend of Cloud Infrastructure Administration, Site Reliability Engineering, Security Operations, and Vulnerability Management across multiple clients.\n* Coordinate with client product teams, engineering team members, and other stakeholders to monitor and maintain a secure and resilient cloud-hosted infrastructure to established SLAs in both production and non-production environments.\n* Innovate and implement using automated orchestration and configuration management techniques. Understand the design, deployment, and management of secure and compliant enterprise servers, network infrastructure, boundary protection, and cloud architectures using Infrastructure-as-Code.\n* Create, maintain, and peer review automated orchestration and configuration management codebases, as well as Infrastructure-as-Code codebases. Maintain IaC tooling and versioning within Client environments.\n* Implement and upgrade client environments with CI/CD infrastructure code and provide internal feedback to development teams for environment requirements and necessary alterations.
\n* Work across AWS, Azure and GCP, understanding and utilizing their unique native services in client environments.\n* Configure, tune, and troubleshoot cloud-based tools, and manage cost, security, and compliance for the Client's environments.\n* Monitor and resolve site stability and performance issues related to functionality and availability.\n* Work closely with client DevOps and product teams to provide 24x7x365 support to environments through Client ticketing systems.\n* Support definition, testing, and validation of incident response and disaster recovery documentation and exercises.\n* Participate in on-call rotations as needed to support Client critical events and operational needs that may lie outside of business hours.\n* Support testing and data reviews to collect and report on the effectiveness of current security and operational measures, in addition to remediating deviations from current security and operational measures.\n* Maintain detailed diagrams representative of the Client's cloud architecture.\n* Maintain, optimize, and peer review standard operating procedures, operational runbooks, technical documents, and troubleshooting guidelines.\n\n\n\nWhat You'll Bring\n* BS or above in a related Information Technology field, or an equivalent combination of education and experience\n* 2+ years' experience in 24x7x365 production operations\n* Fundamental understanding of networking and network troubleshooting\n* 2+ years' experience installing, managing, and troubleshooting Linux and/or Windows Server operating systems in a production environment\n* 2+ years' experience supporting cloud operations and automation in AWS, Azure or GCP (and aligned certifications)\n* 2+ years' experience with Infrastructure-as-Code and orchestration/automation tools such as Terraform and Ansible\n* Experience with IaaS platform capabilities and services (cloud certifications expected)\n* Experience with ticketing tools such as Jira and ServiceNow\n* Experience using environmental analytics tools such as Splunk and Elastic Stack for querying, monitoring and alerting\n* Experience in at least one primary scripting language (Bash, Python, PowerShell)\n* Excellent communication, organizational, and problem-solving skills in a dynamic environment\n* Effective documentation skills, including technical diagrams and written descriptions\n* Ability to work as part of a team with a professional attitude and demeanor\n\n\n\nBonus Points\n* Previous experience in a consulting role within dynamic and fast-paced environments\n* Previous experience supporting a 24x7x365 highly available environment for a SaaS vendor\n* Experience supporting security and/or infrastructure incident handling and investigation, and/or system scenario re-creation\n* Experience working within container orchestration solutions such as Kubernetes, Docker, EKS and/or ECS\n* Experience working within an automated CI/CD pipeline for release development, testing, remediation, and deployment\n* Cloud-based networking experience (Palo Alto, Cisco ASAv, etc.)\n* Familiarity with frameworks such as FedRAMP, FISMA, SOC, ISO, HIPAA, HITRUST, PCI, etc.\n* Familiarity with configuration baseline standards such as CIS Benchmarks & DISA STIG\n* Knowledge of encryption technologies (SSL, encryption, PKI)\n* Experience with diagramming (Visio, Lucid Chart, etc.)
\n* Application development experience for cloud-based systems\n\n\n\n\n\nWhy You'll Want to Join Us\n\n\nAt Coalfire, you'll find the support you need to thrive personally and professionally. In many cases, we provide a flexible work model that empowers you to choose when and where you'll work most effectively, whether you're at home or in an office. \nRegardless of location, you'll experience a company that prioritizes connection and wellbeing, and you'll be part of a team where people care about each other and our communities. You'll have opportunities to join employee resource groups, participate in in-person and virtual events, and more. And you'll enjoy competitive perks and benefits to support you and your family, like paid parental leave, flexible time off, certification and training reimbursement, digital mental health and wellbeing support membership, and comprehensive insurance options. \n\n\nAt Coalfire, equal opportunity and pay equity are integral to the way we do business. A reasonable estimate of the compensation range for this role is $95,000 to $110,000, based on national salary averages. The actual salary offer to the successful candidate will be based on job-related education, geographic location, training, licensure and certifications, and other factors. You may also be eligible to participate in annual incentive, commission, and/or recognition programs. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. \n \n#LI-REMOTE \n#LI-JB1 \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar SaaS, DevOps, Cloud, Junior and Engineer jobs:\n\n
$60,000 – $100,000/year\n
\n\n#Location\nUnited States
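The Coalfire role above mixes cloud operations with security and compliance monitoring and lists Python among its expected scripting languages. As a small, illustrative sketch only (not Coalfire tooling), a boto3 script that flags S3 buckets lacking a default server-side encryption configuration might look like this:

```python
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    """Return names of S3 buckets with no server-side encryption configuration."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(name)
            else:
                raise  # surface permission or throttling errors instead of hiding them
    return flagged

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"WARNING: bucket {name} has no default encryption configured")
```

Newer S3 buckets are encrypted by default, so a real compliance check would more likely assert on the specific algorithm or KMS key rather than mere presence of a configuration.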
Remote Lead Software Development Engineer in Test
\nBy making evidence the heart of security, we help customers stay ahead of ever-changing cyber-attacks. \n\nCorelight is a cybersecurity company that transforms network and cloud activity into evidence. Evidence that elite defenders use to proactively hunt for threats, accelerate response to cyber incidents, gain complete network visibility and create powerful analytics using machine-learning and behavioral analysis tools. Easily deployed, and available in traditional and SaaS-based formats, Corelight is the fastest-growing Network Detection and Response (NDR) platform in the industry. And we are the only NDR platform that leverages the power of Open Source projects in addition to our own technology to deliver Intrusion Detection (IDS), Network Security Monitoring (NSM), and Smart PCAP solutions. We sell to some of the most sensitive, mission-critical large enterprises and government agencies in the world.\n\nAs the Lead Software Development Engineer in Test (SDET), you will play a pivotal role in ensuring the quality and reliability of our software products by leading the development of automated testing frameworks for application and performance testing. Your expertise in Python, AWS, and knowledge of network security and Zeek will be instrumental in designing, building, and maintaining robust testing solutions. Additionally, you will be responsible for diagnosing and resolving production issues efficiently to minimize downtime and ensure a seamless user experience.\n\n\nResponsibilities\n\n\n* Lead the design, development, and implementation of automated testing frameworks for application and performance testing.\n\n* Collaborate with cross-functional teams to define testing strategies and requirements, ensuring comprehensive test coverage.\n\n* Utilize your proficiency in the Python programming language to develop and maintain test scripts, ensuring the accuracy and reliability of automated tests.\n\n* Leverage AWS services and resources to optimize test environments and infrastructure for scalability, reliability, and efficiency.\n\n* Apply your knowledge of network security principles and technologies to implement effective security testing strategies.\n\n* Diagnose and troubleshoot production issues promptly to identify root causes and minimize downtime.\n\n* Provide timely fixes and patches to address production issues, collaborating with development and operations teams as needed.\n\n* Drive the continuous improvement of testing processes, tools, and methodologies to enhance efficiency and effectiveness.\n\n* Stay updated on industry best practices, emerging technologies, and trends in software testing and development.\n\n* Provide technical leadership and guidance to the testing team, mentoring junior members and fostering a culture of excellence.\n\n\n\n\nMinimum Qualifications\n\n\n* Strong appreciation and support for our core values: low-ego results, tireless service, and applied curiosity.\n\n* Proven experience (6+ years) in software development and testing, with a focus on automation.\n\n* Proficiency in the Python programming language.\n\n* Strong knowledge and hands-on experience with AWS services and cloud infrastructure.\n\n* Familiarity with performance testing tools and methodologies.\n\n\n\n\nPreferred Qualifications\n\n\n* Experience using Docker, Kubernetes, and containerized microservices.\n\n* Knowledge of relational and NoSQL databases.\n\n* Experience adopting & using Agile development methodologies\n\n* Excellent communication skills.
You thrive by collaborating with multiple teams and use your communication skills to influence product directions.\n\n* Bachelor's degree in Computer Science or related fields, or equivalent experience\n\n\n\n\nWe are proud of our culture and values - driving diversity of background and thought, low-ego results, applied curiosity and tireless service to our customers and community. Corelight is committed to a geographically dispersed yet connected employee base, with employees working from home and office locations around the world. Fueled by an accelerating revenue stream and investments from top-tier venture capital organizations such as CrowdStrike, Accel and Insight, we are rapidly expanding our team. \n\nCheck us out at www.corelight.com\n\nNotice of Pay Transparency:\nThe compensation for this position ranges from $180,000 - $218,000/year and may vary depending on factors such as your location, skills and experience. Depending on the nature and seniority of the role, a percentage of compensation may come in the form of a commission-based or discretionary bonus. Equity and additional benefits will also be awarded. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Python, Testing, Cloud, NoSQL, Junior and Engineer jobs:\n\n
$60,000 – $97,500/year\n
\n\n#Location\nSan Francisco, California, United States
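The Corelight SDET posting above centers on Python-based automated testing for application and performance checks. A minimal, hypothetical pytest sketch (the base URL, paths and latency budget are placeholders, not Corelight services):

```python
import time

import pytest
import requests

BASE_URL = "https://api.example.com"   # placeholder service under test
LATENCY_BUDGET_S = 0.5                 # arbitrary per-request latency budget

@pytest.mark.parametrize("path", ["/health", "/v1/sensors"])
def test_endpoint_is_healthy_and_fast(path):
    # Measure wall-clock latency of a single GET and assert on status and speed.
    started = time.monotonic()
    response = requests.get(f"{BASE_URL}{path}", timeout=5)
    elapsed = time.monotonic() - started

    assert response.status_code == 200, f"{path} returned {response.status_code}"
    assert elapsed < LATENCY_BUDGET_S, f"{path} took {elapsed:.3f}s"
```

A fuller performance suite would aggregate percentiles over many requests rather than asserting on a single sample; this only shows the shape of an automated check.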
\nWe are currently seeking a Senior Data Engineer with 5-7 years' experience. The ideal candidate will have the ability to work independently within an Agile working environment and have experience working with cloud infrastructure, leveraging tools such as Apache Airflow, Databricks, DBT and Snowflake. Familiarity with real-time data processing and AI implementation is advantageous. \n\n\n\nResponsibilities:\n* Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch & real-time use cases.\n* Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks. \n* Leverage Databricks for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.\n* Utilize PySpark and/or Ray to build and scale distributed computing tasks, enhancing the performance of machine learning model training and inference processes.\n* Monitor, troubleshoot, and resolve issues within data pipelines and infrastructure, implementing best practices for data engineering and continuous improvement.\n* Diagrammatically document data engineering workflows. \n* Collaborate with other Data Engineers, Product Owners, Software Developers and Machine Learning Engineers to implement new product features by understanding their needs and delivering in a timely manner. \n\n\n\nQualifications:\n* Minimum of 3 years' experience deploying enterprise-level scalable data engineering solutions.\n* Strong examples of independently developed end-to-end data pipelines, from problem formulation and raw data to implementation, optimization, and results.\n* Proven track record of building and managing scalable cloud-based infrastructure on AWS (incl. S3, DynamoDB, EMR).
\n* Proven track record of implementing and managing the AI model lifecycle in a production environment.\n* Experience using Apache Airflow (or equivalent), Snowflake, and Lucene-based search engines.\n* Experience with Databricks (Delta format, Unity Catalog).\n* Advanced SQL and Python knowledge with associated coding experience.\n* Strong experience with DevOps practices for continuous integration and continuous delivery (CI/CD).\n* Experience wrangling structured & unstructured file formats (Parquet, CSV, JSON).\n* Understanding and implementation of best practices within ETL and ELT processes.\n* Data quality best-practice implementation using Great Expectations.\n* Real-time data processing experience using Apache Kafka (or equivalent) is advantageous.\n* Ability to work independently with minimal supervision.\n* Takes initiative and is action-focused.\n* Mentors and shares knowledge with junior team members.\n* Collaborative, with a strong ability to work in cross-functional teams.\n* Excellent communication skills, with the ability to communicate with stakeholders across varying interest groups.\n* Fluency in spoken and written English.\n\n\n\n\n\n#LI-RT9\n\n\nEdelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics and data consultancy with a distinctly human mission.\n\n\nWe use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting and connections more meaningful.\n\n\nDXI brings together and integrates the necessary people-based PR, communications, social, research and exogenous data, as well as the technology infrastructure to create, collect, store and manage first-party data and identity resolution. DXI comprises over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.\n\n\nTo learn more, visit: https://www.edelmandxi.com \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Python, DevOps, Cloud, Senior, Junior and Engineer jobs:\n\n
$60,000 – $110,000/year\n
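The Edelman DXI posting above asks for PySpark experience and pipelines with built-in data-quality checks. A minimal, illustrative PySpark batch job (the bucket paths, column names and rejection threshold are invented for the sketch):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_clean").getOrCreate()

# Read raw Parquet, drop obviously bad rows, and deduplicate on the business key.
raw = spark.read.parquet("s3://example-raw-bucket/orders/")        # hypothetical path
clean = (
    raw
    .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
    .dropDuplicates(["order_id"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Simple data-quality gate: fail the job if too many rows were rejected.
rejected_ratio = 1 - clean.count() / max(raw.count(), 1)
if rejected_ratio > 0.05:
    raise ValueError(f"Rejected {rejected_ratio:.1%} of rows; aborting write")

clean.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")
```

In a production setup these expectations would more likely live in a framework such as Great Expectations (named in the posting) rather than inline assertions.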
\nVerana Health, a digital health company that delivers quality drug lifecycle and medical practice insights from an exclusive real-world data network, recently secured a $150 million Series E led by Johnson & Johnson Innovation - JJDC, Inc. (JJDC) and Novo Growth, the growth-stage investment arm of Novo Holdings. \n\nExisting Verana Health investors GV (formerly Google Ventures), Casdin Capital, and Brook Byers also joined the round, as well as notable new investors, including the Merck Global Health Innovation Fund, THVC, and Breyer Capital.\n\nWe are driven to create quality real-world data in ophthalmology, neurology and urology to accelerate quality insights across the drug lifecycle and within medical practices. Additionally, we are driven to advance the quality of care and quality of life for patients. DRIVE defines our internal purpose and is the galvanizing force that helps ground us in a shared corporate culture. DRIVE is: Diversity, Responsibility, Integrity, Voice-of-Customer and End-Results. Click here to read more about our culture and values. \n\nOur headquarters are located in San Francisco and we have additional offices in Knoxville, TN and New York City, with employees working remotely in AZ, CA, CO, CT, FL, GA, IL, LA, MA, NC, NJ, NY, OH, OR, PA, TN, TX, UT, VA, WA, WI. All employees are required to have permanent residency in one of these states. Candidates who are willing to relocate are also encouraged to apply. \n\nJob Title: Data Engineer\n\nJob Intro:\n\nAs a Data/Software Engineer at Verana Health, you will be responsible for extending a set of tools used for data pipeline development. You will have strong hands-on experience in the design and development of cloud services, and a deep understanding of data quality, metadata management, data ingestion, and curation. You will generate software solutions using Apache Spark, Hive, Presto, and other big data frameworks, and analyze systems and requirements to provide the best technical solutions with regard to the flexibility, scalability, and reliability of the underlying architecture.
Document and improve software testing and release processes across the entire data team.\n\nJob Duties and Responsibilities:\n\n\nArchitect, implement, and maintain scalable data architectures to meet data processing and analytics requirements, utilizing AWS and Databricks.\n\nTroubleshoot complex data issues and optimize pipelines, taking into consideration data quality, computation and cost.\n\nCollaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions.\n\nDesign solutions to problems related to the ingestion and curation of highly variable data structures in a highly concurrent cloud environment.\n\nRetain metadata to track execution details, support reproducibility, and provide operational metrics.\n\nCreate routines that add observability and alerting around pipeline health.\n\nEstablish data quality checks and ensure data integrity and accuracy throughout the data lifecycle.\n\nResearch, perform proofs of concept, and leverage performant database technologies (such as Aurora Postgres, Elasticsearch, Redshift) to support end-user applications that need sub-second response times.\n\nParticipate in code reviews.\n\nProactively stay updated on industry trends and emerging technologies in data engineering.\n\nDevelop data services using RESTful APIs that are secure (OAuth/SAML), scalable (containerized using Docker), observable (using monitoring tools such as Datadog or the ELK stack), and documented using OpenAPI/Swagger, built with frameworks in Python/Java and deployed through automated CI/CD using GitHub Actions.\n\nDocument data engineering processes, architectures, and configurations.\n\n\n\n\nBasic Requirements:\n\n\nA minimum of a BS degree in computer science, software engineering, or a related scientific discipline.\n\nA minimum of 3 years of experience in software development.\n\nStrong programming skills in languages such as Python/PySpark and SQL.\n\nExperience with Delta Lake, Unity Catalog, Delta Sharing, and Delta Live Tables (DLT).\n\nExperience with data pipeline orchestration tools such as Airflow and Databricks Workflows.\n\n1 year of experience working in an AWS cloud computing environment, preferably with Lambda, S3, SNS, SQS.\n\nUnderstanding of data management principles (governance, security, cataloging, lifecycle management, privacy, quality).\n\nGood understanding of relational databases.\n\nDemonstrated ability to build product- and customer-driven software tools in a collaborative, team-oriented environment.\n\nStrong communication and interpersonal skills.\n\nExperience using source code version control.\n\nHands-on experience with Docker containers and container orchestration.\n\n\n\n\nBonus:\n\n\nHealthcare and medical data experience is a plus.\n\nAdditional experience with modern compiled programming languages (C++, Go, Rust).\n\nExperience building HTTP/REST APIs using popular frameworks.\n\nExperience building out extensive automated test suites.\n\n\n\n\nBenefits:\n\nWe provide health, vision, and dental coverage for employees.\n\n\n\n\n\n\n\nVerana pays 100% of employee insurance coverage and 70% of family coverage.\n\nPlus an additional monthly $100 individual / $200 HSA contribution with an HDHP.\n\n\n\n\n\n\n\n\nSpring Health mental health support\n\nFlexible vacation plans\n\nA generous parental leave policy and family building support through the Carrot app\n\n$500 learning and development budget\n\n$25/wk in Doordash credit\n\nHeadspace meditation app - unlimited access\n\nGympass - 3 free live classes per week + monthly discounts for gyms 
like Soulcycle\n\n\n\n\nFinal note:\n\nYou do not need to match every listed expectation to apply for this position. Here at Verana, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.\n\n \n\n \n\n \n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Design, Docker, Testing, Cloud and Engineer jobs:\n\n
$70,000 – $100,000/year\n
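The Verana Health posting above describes building secure, observable RESTful data services in Python, documented with OpenAPI/Swagger. Frameworks such as FastAPI generate that documentation automatically; a minimal, hypothetical sketch of such a service endpoint (the service and pipeline names are invented):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="curation-status")  # hypothetical service name

class PipelineStatus(BaseModel):
    pipeline: str
    healthy: bool
    last_run: str

# Placeholder in-memory store; a real service would query a database instead.
STATUS = {
    "ophthalmology_curation": PipelineStatus(
        pipeline="ophthalmology_curation", healthy=True, last_run="2024-01-01T00:00:00Z"
    )
}

@app.get("/pipelines/{name}", response_model=PipelineStatus)
def get_pipeline(name: str) -> PipelineStatus:
    """Return the latest status of a curation pipeline; interactive docs are served at /docs."""
    status = STATUS.get(name)
    if status is None:
        raise HTTPException(status_code=404, detail="unknown pipeline")
    return status
```

Authentication (OAuth/SAML), containerization and monitoring hooks mentioned in the posting would layer on top of this skeleton.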
\nWhat You'll Do:\n\nWe're looking for a talented and intensely curious Senior AWS Cloud Engineer who is nimble and focused, with a startup mentality. In this newly created role you will be the liaison between data engineers, data scientists and analytics engineers. You will work to create cutting-edge architecture that provides increased performance, scalability and concurrency for Data Science and Analytics workflows.\n\n Responsibilities\n\n\n* Provide AWS infrastructure support and systems administration in support of new and existing products, implemented through: IAM, EC2, S3, AWS networking (VPC, IGW, NGW, ALB, NLB, etc.), Terraform, CloudFormation templates, and security: Security Groups, GuardDuty, CloudTrail, Config and WAF.\n\n* Monitor and maintain production, development, and QA cloud infrastructure resources for compliance with all six pillars of the AWS Well-Architected Framework, including the Security Pillar.\n\n* Develop and maintain Continuous Integration (CI) and Continuous Deployment (CD) pipelines needed to automate testing and deployment of all production software components as part of a fast-paced, agile Engineering team. Technologies required: ElastiCache, Bitbucket Pipelines, GitHub, Docker Compose, Kubernetes, Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS) and Linux-based server instances.\n\n* Develop and maintain Infrastructure as Code (IaC) services for creation of ephemeral cloud-native infrastructure hosted on Amazon Web Services (AWS) and Google Cloud Platform (GCP). Technologies required: AWS CloudFormation, Google Cloud Deployment Manager, AWS SSM, YAML, JSON, Python.\n\n* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 99.99% uptime. Technologies required: AWS IAM, AWS CloudWatch, AWS EventBridge, AWS SSM, AWS SQS, AWS SNS, AWS Lambda and Step Functions, Python, Java, RDS Postgres, RDS MySQL, AWS S3, Docker, AWS Elasticsearch, Kibana, AWS Amplify.\n\n* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 100% cybersecurity compliance and surveillance. Technologies required: AWS SSM, YAML, JSON, Python, RDS Postgres, Tenable, CrowdStrike EPP, Sophos EPP, Wiz CSPM, Linux Bash scripts.\n\n* Design and code technical solutions that improve the scalability, performance, and reliability of all Data Acquisition pipelines.
Technologies required: Google Ads APIs, YouTube Data APIs, Python, Java, AWS Glue, AWS S3, AWS SNS, AWS SQS, AWS KMS, AWS RDS Postgres, AWS RDS MySQL, AWS Redshift.\n\n* Monitor and remediate server and application security events as reported by CrowdStrike EPP, Tenable, Wiz CSPM, Invicti.\n\n\n\n\nWho you are:\n\n\n* Minimum of 5 years of systems administration or DevOps engineering experience on AWS\n\n* Track record of success in systems administration, including system design, configuration, maintenance, and upgrades\n\n* Excels in architecting, designing, developing, and implementing cloud-native AWS platforms and services.\n\n* Knowledgeable in managing cloud infrastructure in a production environment to ensure high availability and reliability.\n\n* Proficient in automating system deployment, operation, and maintenance using Infrastructure as Code: Ansible, Terraform, CloudFormation, and other common DevOps tools and scripting.\n\n* Experience with Agile processes in a structured setting required; Scrum and/or Kanban.\n\n* Experience with security and compliance standards such as PCI and SOC, as well as data privacy and protection standards, is a big plus.\n\n* Experienced in implementing dashboards and data for decision-making related to team and system performance, relying heavily on telemetry and monitoring.\n\n* Exceptional analytical capabilities.\n\n* Strong communication skills and ability to effectively interact with Engineering and Business Stakeholders.\n\n\n\n\nPreferred Qualifications:\n\n\n* Bachelor's degree in technology, engineering, or a related field\n\n* AWS certifications (Solutions Architect, DevOps Engineer, etc.)\n\n\n\n\nWhy Spotter:\n\n\n* Medical and vision insurance covered up to 100%\n\n* Dental insurance\n\n* 401(k) matching\n\n* Stock options\n\n* Complimentary gym access\n\n* Autonomy and upward mobility\n\n* Diverse, equitable, and inclusive culture, where your voice matters\n\n\n\n\nIn compliance with local law, we are disclosing the compensation, or a range thereof, for roles that will be performed in Culver City. Actual salaries will vary and may be above or below the range based on various factors including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. A reasonable estimate of the current pay range is: $100-$500K salary per year. The range listed is just one component of Spotter's total compensation package for employees. Other rewards may include an annual discretionary bonus and equity. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Docker, Testing, DevOps, Cloud, Senior, Engineer and Linux jobs:\n\n
$50,000 – $80,000/year\n
\n\n#Location\nLos Angeles, California, United States
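The Spotter role above leans on AWS Lambda, SQS and Python for event-driven automation. A minimal, illustrative Lambda handler for an SQS-triggered function (the message fields are assumptions, not Spotter's actual payloads):

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    """Process a batch of SQS records; each body is assumed to be a JSON job spec."""
    processed = 0
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        job_id = body.get("job_id", "unknown")      # hypothetical field name
        logger.info("processing job %s from message %s", job_id, record["messageId"])
        # ... domain-specific work (e.g. a data-acquisition step) would go here ...
        processed += 1
    return {"processed": processed}
```

Failures raised from the handler cause SQS to retry the batch, so real code would also decide how to handle partial-batch failures and dead-letter queues.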
\nHazel partners with schools and families to provide physical and mental virtual health care that helps students feel better and get back to learning. As telehealth becomes more and more relevant in the lives of children, Hazel is experiencing tremendous company growth. Our innovative response to our nation's call for equitable, affordable, and safe virtual access to healthcare has been recognized by Fast Company as "one of the world's most innovative places to work" in 2023. \n\nHelping students and their families feel better takes a team of smart, dedicated people. As an integral member of the Hazel team, you will:\n\n\n* Make an Impact: Work with a team that is increasing equitable access to quality health care experiences for students and their families\n\n* Enable Scale: Work with a team that is building and professionalizing a high-growth, high-impact social enterprise\n\n* Feel Valued: Work with a team that is being compensated competitively, developed professionally, and celebrated frequently for making a meaningful difference\n\n\n\n\nCheck us out at Hazel Health Careers.\n\nThe Role: Senior Infrastructure Engineer\n\nLocation: Remote, San Francisco\n\nAbout This Role:\n\nAs a Senior Infrastructure Engineer, you will work with server engineers to translate engineering needs into solutions that directly address fundamental goals of security, reliability, scalability, performance, and extendability. You will also help with automated processes and tooling in order to help the server engineers work more efficiently. \n\nPrimary Responsibilities:\n\n\n* Build user access configuration for data stores/servers/cloud.\n\n* Automate testing per commit, linting, and coverage.\n\n* Automate release management (deployments and rollbacks), and integrate it with monitoring. \n\n* Work with iOS and web to understand roll-forward and roll-back safety.\n\n* Build and manage infrastructure as code for our cloud solutions, such as for Heroku, AWS, and Splunk\n\n* Transition custom tooling to standardized solutions\n\n\n\n\nYour Background:\n\nWe are looking for diverse individuals who want to support our mission and values. Please consider applying even if you don't fully meet 100% of these criteria. \n\n\n* 5-7+ years of DevOps-specific experience, including:\n\n\n\n* Public clouds, such as AWS, GCP, or Azure as your production environment\n\n* Databases, such as Postgres, RDS, Cloud SQL, Aurora, etc.\n\n* Linux systems admin, network, configuration management, etc.\n\n\n\n* Expert in infrastructure automation using Terraform, Ansible, or similar tools\n\n* Expert in Git or similar distributed revision control systems\n\n* Expert in reading, writing, and maintaining scripts (shell, python, etc.)\n\n* Experience with continuous integration/delivery processes, principles, and tools\n\n* Experience with log management and analytics such as Splunk and SignalFx\n\n* Experience securing cloud infrastructure and systems in a regulated environment (e.g., finance, healthcare, gov).
\n\n* Experience with Kubernetes and/or Docker \n\n* Experience with highly available and scalable SaaS solutions\n\n* Demonstrated excellent written and verbal communication skills -- able to communicate technical issues to non-technical and technical audiences\n\n* Tolerance for ambiguity - highly effective in ambiguous environments and demonstrated ability to develop clear strategies and plans with appropriately managed risks\n\n\n\n\nBonus points for experience with:\n\n\n* Java, or Heroku \n\n* Scalability \n\n* Site Reliability experience \n\n* Experience with the security policies and practices of HIPAA/HITECH-compliant infrastructure or in similar regulated environments\n\n\n\n\nTotal compensation for this role is market competitive, with a base salary range of $185,000 to $215,000, management bonus, a 401k match, healthcare coverage, paid-time off, and a broad range of other benefits and perks. Peruse our benefits at Hazel Health Benefits.\n\nOur Hiring Process:\n\nAt Hazel, we value your time and aim to run a hiring process that takes no more than 4 weeks, involving interview activities customized for each role and requisite skill set. We understand that interviewing for a new job can be a big change and the Hazel Recruitment Team is excited to guide you through this process.\n\nWe believe talent is everywhere, and so is opportunity. While we have physical offices in San Francisco and Dallas, we have embraced working remotely throughout the United States. While some roles may require proximity to our San Francisco or Dallas offices, remote roles can sit in any of the following states: AZ, CA, CO, DC, DE, FL, GA, HI, IL, ME, MD, MA, MI, MO, NE, NV, NJ, NM, NY, NC, OR, PA, SC, TN, TX, VT, VA, WA and WI. Please only apply if you live and work full-time in one of the states listed above or plan to relocate to one of these states before starting your employment with Hazel. State locations and specifics are subject to change as our hiring requirements shift.\n\nWe are committed to creating a diverse, inclusive and equitable workplace. Hazel Health values the minds, experiences and perspectives of people from all walks of life. We are proud to value diversity and be an equal opportunity employer. Qualified candidates with arrest and conviction records will be considered for employment in accordance with the Fair Hiring laws. Learn more about working with us at Hazel Health Life. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar SaaS, Testing, Cloud, Senior and Engineer jobs:\n\n
$60,000 – $80,000/year\n
\n\n#Location\nSan Francisco, California, United States
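The Hazel posting above includes automating release management (deployments and rollbacks) and integrating it with monitoring. As a rough, hypothetical sketch of that idea (the health URL and rollback command are placeholders, not Hazel tooling), a post-deploy gate script might look like this:

```python
import subprocess
import sys
import time
import urllib.request

HEALTH_URL = "https://app.example.com/healthz"   # placeholder health endpoint
ROLLBACK_CMD = ["./deploy.sh", "--rollback"]     # placeholder rollback command

def healthy(url: str, attempts: int = 5, delay_s: float = 10.0) -> bool:
    """Poll the health endpoint a few times after a deploy before declaring success."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # connection errors and non-2xx responses count as "not healthy yet"
        time.sleep(delay_s)
    return False

if __name__ == "__main__":
    if not healthy(HEALTH_URL):
        print("Health check failed; rolling back", file=sys.stderr)
        subprocess.run(ROLLBACK_CMD, check=True)
        sys.exit(1)
    print("Deploy looks healthy")
```

In a mature pipeline this gate would typically be a CI/CD stage wired to real monitoring signals rather than a standalone script.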