About Phaidra

Phaidra is building the future of industrial automation.

The world today is filled with static, monolithic infrastructure. Factories, power plants, buildings, etc. operate the same way they've operated for decades because the controls programming is hard-coded: thousands of lines of rules and heuristics that define how the machines interact with each other. The result of all this hard-coding is that facilities are frozen in time, unable to adapt to their environment while their performance slowly degrades.

Phaidra creates AI-powered control systems for the industrial sector, enabling industrial facilities to automatically learn and improve over time. Specifically:

* We use reinforcement learning algorithms to provide this intelligence, converting raw sensor data into high-value actions and decisions.
* We focus on industrial applications, which tend to be well-sensorized with measurable KPIs, perfect for reinforcement learning.
* We enable domain experts (our users) to configure the AI control systems (i.e. agents) without writing code. They define what they want their AI agents to do, and we do it for them.

Our team has a track record of applying AI to some of the toughest problems. From achieving superhuman performance with DeepMind's AlphaGo, to reducing the energy required to cool Google's Data Centers by 40%, we deeply understand AI and how to apply it in production for massive impact.

Phaidra is based in the USA but 100% remote; we do not have a physical office. We hire employees internationally with the help of our partner, OysterHR. Our team is currently located throughout the USA, Canada, UK, Norway, Italy, Spain, Portugal, and India.

**Please only apply to one opening. If you are a better fit for another opening, our team will move your application. Candidates who apply to multiple openings will not be considered.**

Who You Are

We are looking for a very experienced Software Engineer with a focus on MLOps tech leadership to be a part of our growing AI Platform team. You are bold and creative, and have deep empathy for customers. You will design and implement significant parts of the code base and will have the opportunity to make an immediate impact with your work and guide the product and team as we grow.

You are curious and like to understand technologies and their tradeoffs in depth, providing technical guidance to the team and peers as and when required. Leading by example, you have accumulated a wealth of insights and experiences from your hands-on involvement in the field, and you are committed to rolling up your sleeves and getting work done. You like joining and supporting other engineers in their work to learn from them as well as letting them benefit from your expertise and experience.

You have the motivation and skills to identify technical product needs, initiate projects, and own their delivery, involving engineering peers as needed. You are comfortable with respectfully challenging the status quo to drive and deliver technical excellence in the team.

* We are seeking a team member located within one of the following areas: USA/Canada/UK/EU

Responsibilities

The AI Platform team you are joining is responsible for building the core platform that powers model training, inference, and decision making in our products. Furthermore, the team owns MLOps and the services hosting our AI capabilities. Productionizing results from Research, extending our systems, and providing support according to our customer needs also fall within the team's responsibilities. You will join this team as a very experienced engineer with a focus on MLOps solutions to grow our expertise in that area, but also contribute as a software engineer more widely in the team.

As an organization, we strongly believe in expertise across the stack. As such, you will experience flavors of Machine Learning, Software Engineering, Distributed Systems, MLOps and DevOps.

In particular, you will:

* Design, build and lead the MLOps initiatives and vision for the AI Platform to strengthen automation, orchestration, versioning, observability, monitoring and collaboration for the platform.
* Build and design scalable components for the AI Platform to allow high-throughput training and inference for RL agents doing realtime inference for autonomous control of industrial systems.
* Contribute to the design and implementation of the product backend by writing REST & gRPC API services and scalable event-driven backend applications.
* Design clear, extensible software interfaces for the team's customers and maintain a high release quality bar.
* Perform DevOps duties of CI/CD, Release & Deployment management.
* Be a part of our global production on-call team and own & operate your services in production, meeting Phaidra's high bar for operational excellence.
* Lead cross-functional initiatives, collaborating with engineers, product managers and TPMs across teams.
* Mentor your peers and be a technical role model in the team.

Onboarding

In your first 30 days...

* You will be immersed in an onboarding program that introduces you to Phaidra and our product.
* You will spend time in the Engineering org, learning how the teams operate, interact, and approach problems.
* You will read various parts of our handbook and familiarize yourself with the documentation culture at Phaidra.
* You will set up your development environment and start working on an onboarding exercise that will introduce you to various parts of our code base.
* You will learn about how we use agile and be able to navigate our sprint boards and backlogs.
* You will learn about various team standards and development & release processes.
* You will start to learn about our system architecture and infrastructure.
* You will start picking up a few good "first tasks" to get yourself accustomed to the end-to-end release flow.

In your first 60 days...

* You will get a solid understanding of what Phaidra does and how we do it.
* You will meet with team members across Phaidra and start building relationships that will help you be successful at your job.
* You will complete the onboarding exercise and will be on your way to completing your first production task.
* You will take ownership of the MLOps work on the team, identify gaps and propose roadmap items on the topic.

In your first 90 days...

* You will be fully integrated in the team and with team members across the company.
* You will have a more in-depth understanding of our system architecture and infrastructure.
* You will complete your first on-call experience, helping monitor and improve our production environments.
* You will become an expert with our tooling.
* You will start to contribute to knowledge sharing throughout Phaidra and the team.
* You will proactively drive MLOps topics in the team and represent them technically throughout the company.
Key Qualifications

* 10+ years of work experience.
* Proven record of impact as a Tech Leader and bar-raiser for ambitious Software Engineering teams.
* Strong experience designing and implementing MLOps solutions for AI production systems.
* Extensive experience with platform Software Engineering, with the ability to contribute at all levels as an individual contributor and tech leader.
* Strong expertise in building, operating and monitoring large-scale multi-tenant systems with high availability, fault tolerance, performance tuning, monitoring, and metrics collection.
* Ability to take ownership of realtime production systems: aligning technical with business requirements, raising the bar for operational excellence and on-call incident handling.
* Strong expertise in Python and Cloud environments.
* Very good grasp of Machine Learning (especially Deep Learning) fundamentals.
* Ability to collaborate and communicate effectively in an all-remote setting.
* Doing your work with curiosity, ownership, transparency & directness, outcome orientation, and customer empathy.

Bonus

* Experience building applications that can be deployed in the cloud, as well as in hybrid or on-prem environments.
* Exposure to Reinforcement Learning or other in-depth knowledge of modern ML applications.
* Experience with industrial applications, industrial control systems, IoT, sensor time-series applications, or similar.

Relevant Technologies from our Stack

* Python, Go
* PyTorch, PyTorch Lightning
* Ray.io, Prefect, mlflow
* REST & gRPC micro-services
* Docker, Kubernetes, Terraform & Kapitan
* GCP - GKE, PubSub, CloudSQL, BigTable, Postgres, etc.
* Grafana Cloud, Prometheus
* Poetry, Pants
* Gitlab CI, ArgoCD, Atlantis
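For context on the kind of MLOps tooling named in the stack above (mlflow in particular), here is a minimal, hypothetical sketch of tracking a training run; the experiment name, parameters, and metrics are illustrative placeholders, not Phaidra code.

```python
# Hypothetical sketch: tracking a training run with mlflow, one of the
# tools named in the stack above. Names and values are illustrative only.
import random

import mlflow

mlflow.set_experiment("agent-training-demo")  # assumed experiment name

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters so every run is reproducible and comparable.
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_param("batch_size", 256)

    # Stand-in for a real training loop; log one metric per "epoch".
    for epoch in range(5):
        reward = random.random()  # placeholder for an RL evaluation metric
        mlflow.log_metric("mean_episode_reward", reward, step=epoch)
```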
General Interview Process

All of our interviews are held via Google Meet, and an active camera connection is required.

* Meeting with Operations (30 minutes): The purpose of this interview is to meet you, learn more about your background, discuss what you are looking for in a new position and cover formalities around your application.
* Tech Lead interview (60 minutes): This interview is a combination of technical and cultural fit assessment. We will cover your technical experience and skills as an engineer and a tech lead while discussing projects that you have worked on in the past. You will meet the manager for the role as well as our VP of Engineering, with the opportunity to ask any questions about the team, role and engineering at Phaidra.
* ML system design & SRE (90 minutes): In this interview, we will go over a real-world MLOps problem. You can expect to draw architecture diagrams using boxes & arrows in your browser. We will talk about system design, scalability and monitoring.
* ML interview (60 minutes): This interview will focus on Machine Learning approaches, algorithms and theory. You will be asked about ML algorithms you are familiar with, how they work under the hood and how to use them in an applied setting.
* Culture fit interview with Phaidra's co-founders (30 minutes): This interview focuses on alignment with Phaidra's values and the mutual cultural fit.

Base Salary

* US Residents: $156,000-$234,000/year
* UK Residents: £108,000-£162,000/year

Salary ranges for EU countries will vary based on the market rate for the location.

This position will also include equity.

These are good-faith estimates of the base salary range for this position. Multiple factors such as experience, education, level, and location are taken into account when determining compensation.

Benefits & Perks

* Fast-paced and team-oriented environment where you will be instrumental in the direction of the company.
* Phaidra is a 100% remote company with a digital nomad policy.
* Competitive compensation & equity.
* Outsized responsibilities & professional development.
* Training is foundational: functional, customer immersion, and development training.
* Medical, dental, and vision insurance (exact benefits vary by region).
* Unlimited paid time off, with a required minimum of 20 days off per year.
* Paid parental leave (exact benefits vary by region).
* Home office setup allowance and company MacBook.
* Monthly remote work stipend.

On being Remote

We are thoughtful about remote collaboration. We look to the pioneers - like GitLab - for inspiration and best practices to create a stellar remote work environment. We have a documentation-first culture and actively practice asynchronous communication in everything we do. Our team stays connected through tools like Slack and video chat. Most teams meet daily, and we have dedicated all-hands meetings bi-weekly to build strong relationships. We hold virtual team-building events once per month - and even hold virtual socials to watch rocket launches! We have a yearly in-person, all-company summit in locations like Seattle, Athens, Goa, and Barcelona.

Equal Opportunity Employment

Phaidra is an Equal Opportunity Employer; employment with Phaidra is governed on the basis of merit, competence, and qualifications and will not be influenced in any manner by race, color, religion, gender, national origin/ethnicity, veteran status, disability status, age, sexual orientation, gender identity, marital status, mental or physical disability, or any other legally protected status. We welcome diversity and strive to maintain an inclusive environment for all employees. If you need assistance with completing the application process, please contact us at [email protected].

E-Verify Notice

Phaidra participates in E-Verify, an employment authorization database provided through the U.S. Department of Homeland Security (DHS) and Social Security Administration (SSA). As required by law, we will provide the SSA and, if necessary, the DHS, with information from each new employee's Form I-9 to confirm work authorization for those residing in the United States.

Additional information about E-Verify can be found here.

#LI-Remote

WE DO NOT ACCEPT APPLICATIONS FROM RECRUITERS.
Location: Seattle, Washington, United States
Who You Are

We are looking for a driven Software Engineer (MLOps) to be a part of our growing AI Platform team. You are bold and creative, and have deep empathy for customers who may not be tech-savvy. You will design and implement significant parts of the code base and will have the opportunity to make an immediate impact with your work and guide the product and team as we grow.

You are curious and like to understand technologies and their tradeoffs in depth, providing technical guidance to the team and peers as and when required. Leading by example, you have accumulated a wealth of insights and experiences from your hands-on involvement in the field, and you are committed to rolling up your sleeves and getting work done. You like joining and supporting other engineers in their work to learn from them as well as letting them benefit from your expertise and experience.

You have the motivation and skills to identify technical product needs, initiate projects, and own their delivery, involving engineering peers as needed. You are comfortable with respectfully challenging the status quo to drive and deliver technical excellence in the team.

**We are seeking a team member located within one of the following areas: USA/Canada/UK.**

Responsibilities

The AI Platform team you are joining is responsible for building the core platform that powers model training, inference and decision making in our products. Furthermore, the team owns MLOps and the services hosting our AI capabilities. Productionizing results from Research, extending our systems, and providing support according to our customer needs also fall within the team's responsibilities. You will join this team as an experienced engineer with a focus on MLOps solutions to grow our expertise in that area, but also contribute as a software engineer more widely in the team.

As an organization, we strongly believe in expertise across the stack. As such, you will experience flavors of Machine Learning, Software Engineering, Distributed Systems, MLOps and DevOps.

In particular, you will:

* Design, build and lead the MLOps initiatives and vision for the AI Platform to strengthen automation, orchestration, versioning, observability, monitoring and collaboration for the platform.
* Build and design scalable components for the AI Platform to allow high-throughput training and inference for RL agents doing realtime inference for autonomous control of industrial systems.
* Contribute to the design and implementation of the product backend by writing REST & gRPC API services and scalable event-driven backend applications.
* Design clear, extensible software interfaces for the team's customers and maintain a high release quality bar.
* Design and optimize data storage & retrieval mechanisms for high throughput, security & ease of access.
* Perform DevOps duties of CI/CD, Release & Deployment management.
* Be a part of our global production on-call team and own & operate your services in production, meeting Phaidra's high bar for operational excellence.
* Lead cross-functional initiatives, collaborating with engineers, product managers and TPMs across teams.
* Mentor your peers and be a technical role model in the team.

Onboarding

In your first 30 days...

* You will be immersed in an onboarding program that introduces you to Phaidra and our product.
* You will spend time in the Engineering org, learning how the teams operate, interact, and approach problems.
* You will read various parts of our handbook and familiarize yourself with the documentation culture at Phaidra.
* You will set up your development environment and start working on an onboarding exercise that will introduce you to various parts of our code base.
* You will learn about how we use agile and be able to navigate our sprint boards and backlogs.
* You will learn about various team standards and development & release processes.
* You will start to learn about our system architecture and infrastructure.
* You will start picking up a few good "first tasks" to get yourself accustomed to the end-to-end release flow.

In your first 60 days...

* You will get a solid understanding of what Phaidra does and how we do it.
* You will meet with team members across Phaidra and start building relationships that will help you be successful at your job.
* You will complete the onboarding exercise and will be on your way to completing your first production task.
* You will take ownership of the MLOps work on the team, identify gaps and propose roadmap items on the topic.

In your first 90 days...

* You will be fully integrated in the team and with team members across the company.
* You will have a more in-depth understanding of our system architecture and infrastructure.
* You will complete your first on-call experience, helping monitor and improve our production environments.
* You will become an expert with our tooling.
* You will start to contribute to knowledge sharing throughout Phaidra and the team.
* You will proactively drive MLOps topics in the team and represent them technically throughout the company.

Key Qualifications

* 7+ years of work experience.
* Bachelor's or Master's in Computer Science, or equivalent experience.
* Strong experience designing and implementing MLOps solutions for AI production systems.
* Expertise with production Software Engineering: relational and non-relational data modelling, micro-services, understanding of event-driven systems, etc.
* Strong experience building large-scale multi-tenant systems with high availability, fault tolerance, performance tuning, monitoring, and statistics/metrics collection.
* Strong expertise in Python and Cloud environments.
* Good grasp of Machine Learning (especially Deep Learning) fundamentals.
* Ability to collaborate and communicate effectively in an all-remote setting.
* Doing your work with curiosity, ownership, transparency & directness, outcome orientation, and customer empathy.

Bonus

* Experience as a service owner of a realtime production system: operating & monitoring services in production, including using observability tooling such as Prometheus, Grafana, Tempo or equivalent offerings, and incident management.
* Experience building applications that can be deployed in cloud, hybrid or on-prem environments.
* Exposure to Reinforcement Learning.

Our Stack

* Languages - (Backend) Python, Go; (Frontend) JavaScript/TypeScript, React; Customer SDK & Clients - C# .NET
* PyTorch
* Cypress
* Docker, Kubernetes, Terraform & Kapitan
* Gitlab CI, ArgoCD, Atlantis, Vercel
* GCP - GKE, PubSub, CloudSQL, BigTable, Postgres, etc.
* Ray.io (see the sketch at the end of this posting)
* REST & gRPC micro-services
* Poetry, Pantsbuild

General Interview Process

All of our interviews are held via Google Meet, and an active camera connection is required.

* Initial screening interview with a People Operations team member (30 minutes): The purpose of this interview is to meet you, learn more about your background, and discuss what you are looking for in a new position.
* Hiring manager interview (30 minutes): The purpose of this meeting is for you to get to know the manager for the role. This chat will mainly focus on your previous experience and technical background. You can expect to talk about projects that you have worked on in the past and ask any questions about the team & role.
* Technical Interview 1 (60 minutes): The purpose of this interview is to assess your skills in Machine Learning and related mathematics.
* Technical Interview 2 (90 minutes): In this interview, we will go over a real-world MLOps problem. You can expect to draw architecture diagrams using boxes & arrows in your browser. We will talk about system design, scalability and monitoring.
* Meeting with VP of Engineering (30 minutes): This interview is a combination of technical and cultural fit assessment. You will cover the technical experience and skills that you bring, and have an opportunity to ask any questions about the team's culture or vision.
* Culture fit interview with Phaidra's co-founders (30 minutes): This interview focuses on alignment with Phaidra's values.

Base Salary

US Residents: $115,200-$208,800/year

UK Residents: £96,400-£144,000/year

This position will also include equity.

These are good-faith estimates of the base salary range for this position. Multiple factors such as experience, education, level, and location are taken into account when determining compensation.
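As a rough illustration of the distributed-compute layer named in the stack above (Ray.io), here is a minimal, hypothetical sketch of fanning inference calls out as parallel Ray tasks; the "policy" function and inputs are placeholders, not Phaidra code.

```python
# Hypothetical sketch: parallel inference with Ray, one of the tools listed
# in the stack above. The "policy" below is a stand-in, not a real RL agent.
import ray

ray.init()  # local, single-node cluster for demonstration


@ray.remote
def evaluate_policy(observation: float) -> float:
    # Placeholder "policy": a real deployment would load a trained model.
    return observation * 0.5


# Fan out many inference calls as parallel tasks and gather the results.
futures = [evaluate_policy.remote(float(obs)) for obs in range(8)]
print(ray.get(futures))
```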
Location: Seattle, Washington, United States
ABOUT EIGENLABS

EigenLabs provides cryptoeconomic security as a service for blockchain projects. Our platform provides programmatic access to the trust layer of Ethereum to make components reusable, allowing builders to rely on Ethereum for security while saving time and resources typically allocated to bootstrapping their own token. EigenLabs technologies also create technical value & efficiency for validators in Ethereum's proof-of-stake network by enabling them to "re-stake" their assets. Re-staking allows validators to secure multiple protocols and boost rewards by providing security services to oracle networks, data availability networks and dApps running on rollups or side-chains. We're on a mission to hyperscale ETH & dApps and increase decentralization through re-staking!

THE ROLE

At EigenLabs, our engineering teams operate within a customer-aligned team structure, where we believe in shared responsibility for DevOps among all software engineers. As an Infrastructure Software Engineer, you will lead the design and implementation of critical infrastructure components while fostering a culture of collaboration and best-practices adoption within your team and across the company.

You will work closely with frontend and backend software engineers, leveraging your expertise to empower the entire team to collectively maintain, operate, and enhance infrastructure resources effectively. By championing leading practices, automation, and efficient workflows, you will ensure that our customer-aligned teams can deliver high-quality software solutions with speed, security, and reliability.

Your leadership in infrastructure design, implementation, and reliability engineering will directly contribute to EigenLabs' commitment to blockchain innovation and advancing economic freedom through technology. If you're passionate about shaping the future of blockchain and making a meaningful impact on the world, we look forward to hearing from you!

WHAT YOU WILL DO

* Lead the design and implementation of foundational infrastructure components used by every engineering team in production. This includes dynamic configuration, DNS and networking setup, secrets management, and container orchestration (e.g., Kubernetes, including platforms such as EKS).
* Demonstrate leadership by taking the initiative to address challenges and drive continuous improvement. Foster a culture of best-practices adoption and automation within your team. Share your expertise to empower frontend and backend engineers to efficiently maintain, operate, and enhance infrastructure resources.
* Utilize your proficiency in cloud infrastructure platforms such as AWS, GCP, or similar to optimize our infrastructure for scalability and performance.
* Collaborate with cross-functional teams to identify and implement improvements in infrastructure, monitoring, and incident response.
* Make significant contributions to operational excellence initiatives, ensuring the highest level of efficiency and reliability in our infrastructure. Administer network capabilities and support CI/CD pipelines. Monitor infrastructure using tools like Prometheus, Grafana, or similar (see the sketch at the end of this posting).
* Implement and maintain security best practices across all aspects of our infrastructure, including access controls, encryption, and network security.
* Participate in on-call rotations to ensure the continued reliability and uptime of our services.
* Design, implement and champion Continuous Delivery (CI/CD) principles to automate software development and deployment processes.
* Articulate a long-term vision for maintaining and scaling our infrastructure, aligning it with our product and technical goals.
* Build tools for blockchain node operators that make it easy to launch and operate different types of validator environments.
* Proactively contribute to discussions about technical issues, sprint and roadmap planning, and improving engineering processes.

WHAT YOU WILL BRING

* 5+ years of direct experience in infrastructure, SRE or back-end engineering with public cloud and Linux-based systems.
* Strong design and implementation experience with at least one major cloud platform (AWS preferred).
* Knowledge of authentication, authorisation and accounting (IAM, Federation, RBAC, service accounts) for public cloud and Kubernetes.
* Containerization and container orchestration with Docker and Kubernetes. Experience with container hardening and implementation within Kubernetes.
* Proficiency in at least one back-end development language such as Python, Go, or C++.
* Expertise in Infrastructure as Code (IaC) tools such as Terraform (preferred), Ansible, etc., enabling efficient automation and configuration management.
* Demonstrated experience in production environments with proactive monitoring, logging, alerting, and profiling with tools such as Prometheus, Grafana, ELK Stack (or alternative log analysis platforms) to ensure robust performance monitoring and troubleshooting capabilities.
* Experience developing and maintaining CI systems (e.g., GitHub Actions, Jenkins, or CircleCI) to facilitate automation of the SDLC. Working knowledge of CD tooling (e.g., AWS Code Pipelines, ArgoCD, Flux), and an understanding of how GitOps deployment models operate.
* 3+ years of experience with one or more programming languages, preferably Go or Python.
* Knowledge of security best practices, encompassing encryption methods, key management, access control mechanisms, and network security protocols that contribute to the overall security posture of our infrastructure.
* Strong understanding of core Internet protocols: DNS, TCP/SSL, HTTP, gRPC, etc.
* Proven ability to take ownership of projects and work independently. Proficiency in effectively communicating project statuses and diligently documenting activities. Expertise in maintaining meticulous attention to detail across diverse, blockchain-centric environments.
* Track record of successfully delivering complex and high-scale infrastructure.

NICE TO HAVES

* Contribution to open-source projects and/or developing open-source tools is an advantage.
* Data visualization and observability tooling experience.
* Expertise in scaling and migrating systems in dynamic environments, with a strong understanding of incident management processes.
* Experience in information security, including vulnerability assessments, penetration testing, and implementing and automating security controls, with expertise in security protocols (TLS, SSL, SSH) and frameworks (NIST, CIS, ISO 27001). Experience responding to security audits.
* Service discovery and service mesh technologies such as Istio, Linkerd.
* Good understanding of blockchain fundamentals, including wallets, smart contracts, and protocol design.
* Experience with Liquid Staking or Ethereum Node Operations platforms.

In compliance with local law, we are disclosing the compensation, or a range thereof, for roles in locations where legally required. $225,000 - $250,000 is the annual base salary. Other rewards may include annual bonuses, short- and long-term incentives, and program-specific awards. In addition, EigenLabs provides various employee benefits, including:

* Employer-covered Medical, Dental, and Vision plans
* 401k
* Unlimited Paid Time Off
* 12 weeks of fully paid maternity and paternity leave
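To illustrate the Prometheus-style monitoring named in the responsibilities above, here is a minimal, hypothetical Python sketch using the prometheus_client library; the metric names and port are arbitrary examples, not EigenLabs infrastructure.

```python
# Hypothetical sketch: exposing custom metrics for Prometheus to scrape,
# using the prometheus_client library. Metric names and port are
# illustrative only.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS_TOTAL = Counter("demo_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("demo_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS_TOTAL.inc()
        QUEUE_DEPTH.set(random.randint(0, 10))
        time.sleep(1)
```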
Location: Seattle, Washington, United States
This job post is closed and the position is probably filled. Please do not apply.
We are building a world-class team of mission-driven and entrepreneurial people to be based in Spain and Portugal, and are now looking for a DevOps Engineer to join our team. StudentFinance is officially based in Spain, but our team works remotely.

In this role you would work closely with the founding team to build features along our backend roadmap. The role requires a person who is both experienced with modern DevOps and has had previous exposure to startup/innovation environments. We are looking for someone who is keen to "get things done" and can handle the responsibility of being part of a fast-growing startup, as well as having the ability to work independently and proactively.

Your responsibilities:

* Be responsible for the tech infrastructure, mostly based in AWS (see the sketch after this list).
* Be responsible for the CI/CT pipelines and release processes.
* Collaborate in creating scalable architectures that enable the growth plans of the company.
* Work within the tech team, but closely with the operations and product teams as well, both for testing purposes and to have a clear grasp of the operational and business impact of the solutions you create.
* Comply with tight data security and privacy standards.
* Be an active stakeholder in tech architecture discussions around the features you'll build.
* Be a proactive voice in proposing tools and processes that could make yourself and the team more efficient and productive.
* Work cross-functionally to handle and resolve complex and time-sensitive challenges that are important for the company.
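As a small, hypothetical illustration of working with AWS-based infrastructure like that mentioned in the first responsibility, the sketch below lists running EC2 instances with boto3; the region and output format are illustrative assumptions, not StudentFinance's actual setup.

```python
# Hypothetical sketch: a quick inventory of running EC2 instances using
# boto3. Region and output format are illustrative assumptions only.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])
```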
Location: Madrid
This job post is closed and the position is probably filled. Please do not apply.
As a member of the newly-formed Cloud/DevOps group, you will help continue to define our transformation towards an enterprise SaaS solution, hosting numerous top-tier customers. We’re looking for an operations engineer with a development background who has operational experience and expertise spanning high-availability systems in both lower and production environments, a DevOps mentality of continuously improving the system, and a firm grasp on automation and cloud architectures. You must have extensive experience supporting applications developed in .NET, Java, and JavaScript. You should also be passionate about solving problems and developing creative solutions leveraging automation.

Your First Three Months

In your first month, as your familiarity with the product grows, your responsibilities and influence will grow as well. You, along with your team, will be responsible for supporting the product team’s operational needs in our upper environments. Further, you will collaborate with other members of the Development and QA team in established patterns and continue to hone your skills as you start to formulate ways to push the design, architecture and implementation of our CI pipelines (lower and upper environments) to their next phase.

Within two months, you will fill in the gaps to have a well-tested, low-latency and highly available environment for our product operational needs. Working with the development team, you will start to close the gaps in creating and supporting a truly scalable product offering. You will be highly influential in the formation of the rest of our operations team as you help hire our next operations engineer. Your team will be responsible for supporting production environments.

Within three months, you will help drive changes to the operational and development roadmap as we continue onboarding new and existing customers into our hosted production environments in 2020.

What You’ll Do

Design, provision, configure and maintain the operations platform to handle the scale of running several application stacks in the cloud that will be consumed by thousands of customers nationwide and our internal Product Team. Responsibilities will include:

* Automating the deployment and maintenance of cloud platform technologies in both upper and lower environments
* Implementing and overseeing log management, data warehouse, and database operations, including management of Logging/Audit services
* Ensuring all monitoring systems (infrastructure- and application-level) are in place; reporting on availability (see the sketch at the end of this posting)
* Designing and implementing strategies around disaster recovery and security for all sub-systems in infrastructure (e.g., web servers, database, queues, storage, network)
* Aiding in improving the overall product through development-task-specific automation in the lower pipeline:
  * Integrate static analysis tools in the build pipeline (security, code quality, etc.)
  * Add database deployment capability to the release pipeline (automate schema changes across all databases)
  * Incorporate test automation into the build pipeline
  * Separate code from configuration in the build/release pipeline
* Researching and implementing emerging virtualization techniques and advising management around improved scalability
* Building strategic and tactical plans for continued improvement of cloud architecture and operations
* Performing capacity management, load and scalability planning
* Helping drive process improvements for service management, including outage/incident management, rollbacks and reporting
* Assisting management in development and optimization of operational cost models
* Assisting in the establishment of 24x7 performance monitoring, reporting and response protocols
* With the help of your team and the development group, you will provide on-call support outside of normal work hours/days

About You

* You’re driven, humble, and autonomous
* You’re a quick study, a strong communicator, and you’re able to adapt to fast-paced environments
* You have a working knowledge of Agile Development practices (e.g., SCRUM, TDD)
* You are (or have the mindset of) a developer, but are intrigued by the operational aspects of hosting developed solutions
* You’re devoted to automation
* You have 3-5 years of hands-on production experience with Amazon Web Services (AWS), Google Cloud or Microsoft Azure, including:
  * Configuration of VPCs, with VPN to corporate network
  * Experience setting up, maintaining and monitoring global production environments, QA and staging environments, with a strong understanding of the differing needs of such environments
  * 2+ years of experience in a professional production environment
  * 2+ years of experience managing networking infrastructure and monitoring at the application level
* Performance optimization experience, including troubleshooting and resolving network and server latency issues, performing hardware evaluation/selection tasks, and performance vs. cost vs. time analysis
* At least 1 year of experience with automation or scripting tools (e.g., Go, Python, Shell, PowerShell)
* 2+ years of experience with Ansible, Jenkins or other comparable tools
* You’re detail-oriented, with excellent documentation skills, and you’re someone who can successfully manage multiple priorities
* Troubleshooting skills that range from diagnosing hardware/software issues to large-scale failures within a complex infrastructure

Other Things We Hope You Have

* Bachelor’s Degree in Computer Science or equivalent work experience
* Experience with Relational Databases such as Oracle and Aurora, Splunk (or other log aggregation tools), Grafana, Terraform and Prometheus
* Extensive production experience with MS Azure
* Experience working with Docker, Kubernetes and hands-on experience with performance, load and security penetration testing
* Hands-on experience with building out and maintaining a continuous integration and delivery pipeline

Our Team

You will be a member of what will ultimately be a three-person team of Cloud Ops Engineers. You will report directly to our VP of Cloud Ops & Security, but will collaborate extensively with our Principal Cloud Ops Engineer, Director of Development and the rest of our Development Team.

We have an open and collaborative environment where everyone works together to deliver what is needed, from product features to operations needs (e.g., health checks).

We value open and direct communication, taking calculated risks that will push us forward, and investing in our people.

Our Stack

* We have current Production and Continuous Integration footprints in Azure and AWS
* Our front-end applications leverage .NET, Vue.js and Java
* Our APIs are built with .NET and Java
* Our backend comprises MS SQL Server, Oracle and AWS Aurora
* We currently have a CI pipeline that we are looking to take to the next level to help with our growth in customers and employee base
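As a hedged illustration of the availability monitoring and scripting mentioned above, here is a minimal Python sketch that probes a few HTTP health endpoints and reports status and latency; the URLs are placeholders, not Unanet services.

```python
# Hypothetical sketch: a tiny availability probe for HTTP health endpoints,
# using only the standard library. The URLs below are placeholders.
import time
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://example.com/health",
    "https://example.org/health",
]

for url in ENDPOINTS:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            status = response.status
    except urllib.error.URLError as exc:
        status = f"error: {exc.reason}"
    latency_ms = (time.monotonic() - start) * 1000
    print(f"{url} -> {status} ({latency_ms:.0f} ms)")
```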
This job post is closed and the position is probably filled. Please do not apply.
As a founding member of the newly-formed Cloud/DevOps group, you will help continue to define our transformation towards an enterprise SaaS solution, hosting numerous top-tier customers. We’re looking for an operations engineer with a development background who has operational experience and expertise spanning high-availability systems in both lower and production environments, a DevOps mentality of continuously improving the system, and a firm grasp on automation and cloud architectures. You must have extensive experience supporting applications developed in .NET, Java, and JavaScript. You should also be passionate about solving problems and developing creative solutions leveraging automation.

Your First Three Months

In your first month, as your familiarity with the product grows, your responsibilities and influence will grow as well. You, along with your team, will be responsible for supporting the product team’s operational needs in the upper environments. Further, you will collaborate with other members of the Development and QA team in established patterns and continue to hone your skills as you start to formulate ways to push the design, architecture and implementation of our CI pipelines (lower and upper environments) to their next phase.

Within two months, you will fill in the gaps to have a well-tested, low-latency and highly available environment for our product operational needs. Working with the development team, you will start to close the gaps in creating and supporting a truly scalable product offering. You will be highly influential in the formation of the rest of your operations team as you help hire our second operations engineer. Your team will be responsible for supporting production environments.

Within three months, you will help drive changes to the operational and development roadmap as we continue onboarding new and existing customers into our hosted production environments by mid-to-late 2020.

What You’ll Do

Design, provision, configure and maintain the operations platform to handle the scale of running several application stacks in the cloud that will be consumed by thousands of customers nationwide and our internal Product Team. Responsibilities will include:

* Automating the deployment and maintenance of cloud platform technologies in both upper and lower environments
* Implementing and overseeing log management, data warehouse, and database operations, including management of Logging/Audit services
* Ensuring all monitoring systems (infrastructure- and application-level) are in place; reporting on availability
* Designing and implementing strategies around disaster recovery and security for all sub-systems in infrastructure (e.g., web servers, database, queues, storage, network)
* Aiding in improving the overall product through development-task-specific automation in the lower pipeline:
  * Integrate static analysis tools in the build pipeline (security, code quality, etc.)
  * Add database deployment capability to the release pipeline (automate schema changes across all databases; see the sketch at the end of this posting)
  * Incorporate test automation into the build pipeline
  * Separate code from configuration in the build/release pipeline
* Researching and implementing emerging virtualization techniques and advising management around improved scalability
* Building strategic and tactical plans for continued improvement of cloud architecture and operations
* Performing capacity management, load and scalability planning
* Helping drive process improvements for service management, including outage/incident management, rollbacks and reporting
* Assisting management in development and optimization of operational cost models
* Assisting in the establishment of 24x7 performance monitoring, reporting and response protocols
* With the help of your team and the development group, you will provide on-call support outside of normal work hours/days

About You

* You’re driven, humble, and autonomous
* You’re a quick study, a strong communicator, and you’re able to adapt to fast-paced environments
* You have a working knowledge of Agile Development practices (e.g., SCRUM, TDD)
* You are (or have the mindset of) a developer, but are intrigued by the operational aspects of hosting developed solutions
* You’re devoted to automation
* You have 4-8 years of hands-on production experience with Amazon Web Services (AWS), Google Cloud or Microsoft Azure, including:
  * Configuration of VPCs, with VPN to corporate network
  * Experience setting up, maintaining and monitoring global production environments, QA and staging environments, with a strong understanding of the differing needs of such environments
  * 3-4 years of experience in a professional production environment
  * 3-4 years of experience managing networking infrastructure and monitoring at the application level
* Performance optimization experience, including troubleshooting and resolving network and server latency issues, performing hardware evaluation/selection tasks, and performance vs. cost vs. time analysis
* At least 1 year of experience with automation or scripting tools (e.g., Go, Python, Shell, PowerShell)
* 2-3 years of experience with Ansible, Jenkins or other comparable tools
* You’re detail-oriented, with excellent documentation skills, and you’re someone who can successfully manage multiple priorities
* Troubleshooting skills that range from diagnosing hardware/software issues to large-scale failures within a complex infrastructure

Other Things We Hope You Have

* Bachelor’s Degree in Computer Science or equivalent work experience
* Experience with Relational Databases such as Oracle and Aurora, Splunk (or other log aggregation tools), Grafana, Terraform and Prometheus
* Extensive production experience with MS Azure
* Experience working with Docker, Kubernetes and hands-on experience with performance, load and security penetration testing
* Hands-on experience with building out and maintaining a continuous integration and delivery pipeline

Our Team

You will be the founding member of what will ultimately be a three-person team of Cloud Ops Engineers. You will report directly to the SVP of Technical Operations, but will collaborate extensively with the Director of Development and the rest of our Development Team.

We have an open and collaborative environment where everyone works together to deliver what is needed, from product features to operations needs (e.g., health checks).

We value open and direct communication, taking calculated risks that will push us forward, and investing in our people.

Our Stack

* We have current Production and Continuous Integration footprints in Azure and AWS
* Our front-end applications leverage .NET, Vue.js and Java
* Our APIs are built with .NET and Java
* Our backend comprises MS SQL Server, Oracle and AWS Aurora
* We currently have a CI pipeline that we are looking to take to the next level to help with our growth in customers and employee base

Unanet is proud to be an Equal Opportunity Employer. Applicants will be considered for positions without regard to race, religion, sex, national origin, age, disability, veteran status or any other consideration made unlawful by applicable federal, state or local laws.
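To give a rough, hypothetical picture of the "automate schema changes" item above, the sketch below applies ordered SQL migration files to a SQLite database and records which ones have already run; the file layout and table name are illustrative assumptions, not Unanet's pipeline.

```python
# Hypothetical sketch: apply ordered .sql migration files exactly once,
# tracking applied migrations in a table. SQLite is used for simplicity.
import pathlib
import sqlite3

MIGRATIONS_DIR = pathlib.Path("migrations")  # e.g. 001_init.sql, 002_add_index.sql

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")

applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}

for path in sorted(MIGRATIONS_DIR.glob("*.sql")):
    if path.name in applied:
        continue  # already applied on a previous run
    conn.executescript(path.read_text())
    conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (path.name,))
    conn.commit()
    print(f"applied {path.name}")
```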
This job post is closed and the position is probably filled. Please do not apply.
\nOpportunity\n\nSecurityScorecard is hiring a DevOps Engineer to bridge the gap between our global development and operational teams who is motivated to help continue automating and scaling our infrastructure. The DevOps Engineer will be responsible for setting up and managing the operation of project development and test environments as well as the software configuration management processes for the entire application development lifecycle. Your role would be to ensure the optimal availability, latency, scalability, and performance of our product platforms. You would also be responsible for automating production operations, promptly notifying backend engineers of platform issues, and checking long term quality metrics.\n\nOur infrastructure is based on AWS with a mix of managed services like RDS, ElastiCache, and SQS, as well as hundreds of EC2 instances managed with Ansible and Terraform. We are actively using three AWS regions, and have equipment in several data centers across the world.\n\nRegions: North America (GMT-7.00) Mountain time - (GMT-4.00) Atlantic time\n\nResponsibilities\n\n\n* Training, mentoring, and lending expertise to coworkers with regards to operational and security best practises. \n\n* Reviewing and providing feedback on GitHub Pull Requests to team members AND development teams- a significant percentage of our Software Engineers have written Terraform.\n\n* Identifying opportunities for technical and process improvement and owning the implementation. \n\n* Championing the concepts of immutable containers, Infrastructure as Code, stateless applications, and software observability throughout the organization.\n\n* Systems performance tuning with a focus on high availability and scalability.\n\n* Building tools to ease the usability and automation of processes\n\n* Keeping products up and operating at full capacity\n\n* Assisting with migration processes as well as backup and replication mechanisms\n\n* Working on a large-scale distributed environment where you were focused on scalability/reliability/performance\n\n* Ensuring proper monitoring / alerting are configured\n\n* Investigating incidents and performance lapses\n\n\n\n\nCome help us with projects such as…\n\n\n* Extending our compute clusters to support low latency, on-demand job execution\n\n* Turning pets into cattle\n\n* Cross region replication of systems and corresponding data to support low latency access\n\n* Rolling out application performance monitoring to existing services, extending integrations where required\n\n* Migration from self hosted ELK to a SaaS stack\n\n* Continuous improvement of CI/CD processes making builds & deployments faster, safer, and more consistent\n\n* Extending a Global VPN WAN to a datacenter with IPSec+BGP\n\n\n\n\nRequirements\n\n\n* 3+ years of DevOps and/or Operations experience in a Linux based environment\n\n* 1+ years of production environment experience with Amazon Web Services (AWS)\n\n* 1+ years using SQL databases (MySQL, Oracle, Postgres)\n\n* Strong scripting abilities (bash/python)\n\n* Strong Experience with CI/CD processes (Jenkins, Ansible) and automated configuration tools (Puppet/Chef/Ansible)\n\n* Experience with container orchestration (AWS ECS, Kubernetes, Marathon/Mesos)\n\n* Ability to work as part of a highly collaborative team\n\n* Understanding of monitoring tools like DataDog\n\n* Strong written and verbal communication skills\n\n\n\n\nNice to Have\n\n\n* You knew exactly what was meant by "Turning pets into cattle"\n\n* Experience working with 
* Experience working with Kubernetes on bare metal and/or AWS Elastic Kubernetes Service (EKS).
* Experience with RabbitMQ, MongoDB, or Apache Kafka.
* Experience with Presto or Apache Spark.
* Familiarity with computation orchestration tools such as HTCondor, Apache Airflow, or Argo.
* Understanding of network concepts: OSI layers, firewalls, DNS, split-horizon DNS, VPN, routing, BGP, etc.
* A deep understanding of AWS IAM and how it interacts with S3 buckets.
* Experience with SAFe.
* Strong programming skills in 2+ languages.

Tooling We Use

* Data definition, format, and interfaces
  * Definitions: Protobuf v3
  * Normalize from: JSON / XML / CSV
  * Normalize to: Protobuf / ORC
  * Interfaces: REST APIs and object store buckets
* Cloud services: Amazon Web Services
* Databases: PostgreSQL, PrestoDB
* Cache: Redis, Varnish
* Languages: Python / C++14 / Scala / Golang / JavaScript / Ruby / Java
* Job orchestration: HTCondor / Apache Airflow / Rundeck (see the sketch after this list)
* Analytics: Spark
* Storage: NFS/EFS, AWS S3, HDFS
* Computation: Docker containers / VMs / bare metal / EMR
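To make the job-orchestration item above concrete, here is a minimal sketch of the kind of scheduled workflow an engineer in this role might own. It assumes Apache Airflow 2.x; the DAG id, schedule, and task bodies are illustrative placeholders, not part of SecurityScorecard's actual systems.

```python
# Hypothetical Airflow 2.x DAG: pull raw payloads, normalize them, and load them.
# All names and the schedule are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: fetch raw JSON/XML/CSV payloads from an object store bucket.
    print("extracting raw payloads")


def normalize(**context):
    # Placeholder: convert raw payloads to Protobuf/ORC as described in the tooling list.
    print("normalizing to columnar format")


def load(**context):
    # Placeholder: publish normalized records for downstream analytics (e.g. Spark).
    print("loading normalized records")


with DAG(
    dag_id="normalize_scan_data",          # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    normalize_task = PythonOperator(task_id="normalize", python_callable=normalize)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> normalize_task >> load_task
```

In practice the tasks would write to object storage or a database rather than print, but the DAG structure, retries, and explicit task ordering are the point of the sketch.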
# Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar roles related to DevOps, InfoSec, Senior, Engineer, JavaScript, Amazon, Python, Scala, Ruby, SaaS, Golang, Apache, Linux, and Backend:

$70,000 — $120,000/year

# Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
The Opportunity

SecurityScorecard is hiring an Ops Engineer who is motivated to help us continue automating and scaling our infrastructure and to bridge the gap between our global development and operations teams. The Ops Engineer will be responsible for setting up and managing the operation of project development and test environments, as well as the software configuration management processes for the entire application development lifecycle. Your role would be to ensure the optimal availability, latency, scalability, and performance of our product platforms. You would also be responsible for automating production operations, promptly notifying backend engineers of platform issues, and tracking long-term quality metrics.

Our infrastructure is based on AWS, with a mix of managed services like RDS, ElastiCache, and SQS, as well as hundreds of EC2 instances managed with Ansible and Terraform. We actively use three AWS regions and have equipment in several data centers across the world.

Responsibilities

* Training, mentoring, and lending expertise to coworkers with regard to operational and security best practices.
* Reviewing GitHub Pull Requests and providing feedback to team members and development teams; a significant percentage of our Software Engineers have written Terraform.
* Identifying opportunities for technical and process improvement and owning the implementation.
* Championing the concepts of immutable containers, Infrastructure as Code, stateless applications, and software observability throughout the organization.
* Tuning systems performance with a focus on high availability and scalability.
* Building tools to improve the usability and automation of processes.
* Keeping products up and operating at full capacity.
* Assisting with migration processes as well as backup and replication mechanisms.
* Working in a large-scale distributed environment with a focus on scalability, reliability, and performance.
* Ensuring proper monitoring and alerting are configured.
* Investigating incidents and performance lapses.

Come help us with projects such as…

* Extending our compute clusters to support low-latency, on-demand job execution
* Turning pets into cattle
* Cross-region replication of systems and corresponding data to support low-latency access
* Rolling out application performance monitoring to existing services, extending integrations where required
* Migrating from self-hosted ELK to a SaaS stack
* Continuously improving CI/CD processes to make builds and deployments faster, safer, and more consistent
* Extending a global VPN WAN to a data center with IPsec+BGP

Requirements

* 3+ years of DevOps and/or Operations experience
* 1+ years of production environment experience with Amazon Web Services (AWS)
* 1+ years using SQL databases (MySQL, Oracle, Postgres)
* Scripting ability (Bash, Python; C++ a plus)
* Strong experience with CI/CD processes (Jenkins, Ansible) and automated configuration tools (Puppet/Chef/Ansible)
* Experience with container orchestration (AWS ECS, Kubernetes, Marathon/Mesos)
* Ability to work as part of a highly collaborative team
* Understanding of monitoring tools like Datadog
* Strong written and verbal communication skills

Nice to Have

* You know exactly what is meant by "turning pets into cattle"
* Experience working with Kubernetes on bare metal and/or AWS Elastic Kubernetes Service (EKS).
* Experience with RabbitMQ, MongoDB, or Apache Kafka.
* Experience with Presto or Apache Spark.
* Familiarity with computation orchestration tools such as HTCondor, Apache Airflow, or Argo.
* Understanding of network concepts: OSI layers, firewalls, DNS, split-horizon DNS, VPN, routing, BGP, etc.
* A deep understanding of AWS IAM and how it interacts with S3 buckets (see the sketch after this list).
* Experience with SAFe.
* Strong programming skills in 2+ languages.
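As a concrete illustration of the IAM/S3 item above, here is a minimal sketch of how an operations engineer might audit a bucket's policy and public-access settings with boto3. The bucket name is a placeholder and the checks are illustrative, not an exhaustive IAM review or part of SecurityScorecard's tooling.

```python
# Hypothetical S3 audit snippet using boto3; the bucket name is a placeholder.
import json
from typing import Optional

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-data-bucket"  # illustrative bucket name


def bucket_policy(bucket: str) -> Optional[dict]:
    """Return the bucket policy as a dict, or None if no policy is attached."""
    try:
        resp = s3.get_bucket_policy(Bucket=bucket)
        return json.loads(resp["Policy"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchBucketPolicy":
            return None
        raise


def public_access_block(bucket: str) -> dict:
    """Return the S3 Block Public Access settings for the bucket."""
    resp = s3.get_public_access_block(Bucket=bucket)
    return resp["PublicAccessBlockConfiguration"]


if __name__ == "__main__":
    policy = bucket_policy(BUCKET)
    print("policy statements:", len(policy["Statement"]) if policy else 0)
    print("public access block:", public_access_block(BUCKET))
```

A real audit would also walk the IAM policies attached to roles and users, but even this small check catches buckets left without a policy or without Block Public Access configured.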
# Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar roles related to DevOps, Engineer, Amazon, SaaS, Apache, and Backend:

$70,000 — $120,000/year

# Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
EMS Software is looking for a Cloud Operations Engineer who will aid us in the ongoing transformation of our product offering from an on-premise solution to a hybrid offering with a pure SaaS presence.

You will be at the center of a vital growth initiative.

You'll join a company that serves 2,500 great organizations like Accenture, Deloitte, Goldman Sachs, Harvard, and Yale University. Our customers have millions of people using our software to manage events; reserve spaces to meet, work, and study; and analyze and optimize their use of real estate.

We're looking for an engineer with a development background who has some operational experience and expertise spanning high-availability systems in both lower and production environments, a DevOps mentality of continuously improving the system, and a firm grasp of automation and cloud architectures. You must have extensive experience supporting applications developed in at least 3 of the following: .NET, Java, JavaScript, Python, Node, Go, or Ruby. You should also be passionate about solving problems and developing creative solutions leveraging automation.

What You'll Do

* Design, provision, configure, and maintain the platform operations to handle the scale of running several application stacks in the cloud that will be consumed worldwide
* Automate the deployment and maintenance of cloud platform technologies
* Oversee production operations, log management, data warehouse, and database operations, including management of Splunk services
* Ensure all monitoring systems (IT, development, service management, Apdex) are in place
* Enforce consistency of monitoring, reporting, and alarming systems
* Help drive process improvements for service management, including outage/incident management, rollbacks, and reporting
* Research emerging virtualization techniques and advise management
* Perform capacity management and load/scalability planning
* Ensure compliance with deployment and operations documentation
* Assist management in development and optimization of operational cost models
* Design cloud infrastructure for high reliability and availability
* Build strategic and tactical plans for continued improvement of cloud architecture and operations
* Assist in the establishment of 24x7 performance monitoring and response protocols
* Provide on-call support outside of normal work hours/days

About You

* You're driven, humble, and autonomous
* You're a quick study, a strong communicator, and you're able to adapt to a fast-paced environment
* You have a working knowledge of Agile development practices (e.g., Scrum, TDD)
* You are, or have the mindset of, a developer, but are intrigued by the operational aspects of hosting developed solutions
* You are devoted to automation
* You're an expert in Windows (IIS, SQL Server) and Linux
* You have at least 1 year of hands-on production experience with Amazon Web Services (AWS), Google Cloud, or Microsoft Azure.
  This includes:
  * Configuration of VPCs, with VPN to the corporate network
  * Experience setting up, maintaining, and monitoring global production, QA, and staging environments, with a strong understanding of the differing needs of such environments
  * At least 6 months of experience in a professional production environment
  * At least 6 months of experience managing networking infrastructure and monitoring at the application level
* Performance optimization experience, including troubleshooting and resolving network and server latency issues; performing hardware evaluation/selection tasks; and performance vs. cost vs. time analysis
* At least 1 year of experience with automation or scripting tools (e.g., Go, Python, Shell, PowerShell)
* At least 6 months of experience with Ansible and Jenkins
* You're detail-oriented, with excellent documentation skills, and you're someone who can successfully manage multiple priorities
* Troubleshooting skills that range from diagnosing hardware/software issues to large-scale failures within a complex infrastructure

Other Things We Hope You Have

* Bachelor's degree in Computer Science or equivalent work experience
* Experience with Mongo, MS SQL Server, Splunk, Grafana, Terraform, and Prometheus
* Experience working with Docker, Kubernetes, and Go
* Hands-on experience with performance, load, and security penetration testing
* Hands-on experience with building out and maintaining a continuous integration and delivery pipeline

The Team

You will be part of a 6-person team of 4 Operations Engineers, a Director of Cloud Operations, and a Technical Product Owner.

The larger team consists of 13 Developers, 10 Quality Engineers, 4 Product Owners, and 3 UX Designers. We have an open and collaborative environment where everyone works together to deliver what is needed, from product features to operations needs (e.g., health checks).

We value open and direct communication, taking calculated risks that will push us forward, and investing in our people.

Our Stack

* We have current production and continuous integration footprints in Google Cloud (primary), AWS, and Azure
* Our front-end applications leverage React and React Native, Redux, Node, C#, and Knockout
* Our APIs are written in Golang, .NET, and .NET Core
* Our backend runs on MS SQL Server
* We have a well-built CI pipeline that allows us to deploy and stand up customers on demand
* We leverage Ansible heavily; Splunk (JSON logs) is our lifeblood, and we gain operational efficiency and accessibility through Hubot and StackStorm (see the sketch below)
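Since the stack above leans on Splunk-style JSON logs and consistent monitoring, here is a minimal sketch of the kind of health-check script a Cloud Operations Engineer might run. The endpoints and log fields are illustrative placeholders, not EMS Software's actual tooling; a real check would add authentication and ship logs through a forwarder rather than stdout.

```python
# Hypothetical health-check script emitting JSON log lines (Splunk-friendly).
# Endpoints and field names are placeholders.
import json
import time
import urllib.error
import urllib.request

ENDPOINTS = {
    "api": "https://example.internal/healthz",   # illustrative URLs
    "web": "https://example.internal/status",
}


def check(name: str, url: str, timeout: float = 5.0) -> dict:
    """Probe a single endpoint and return a structured result."""
    start = time.monotonic()
    record = {"ts": time.time(), "service": name, "url": url}
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            record.update(status="up", http_code=resp.status)
    except urllib.error.URLError as err:
        record.update(status="down", error=str(err.reason))
    record["latency_ms"] = round((time.monotonic() - start) * 1000, 1)
    return record


if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        # One JSON object per line, ready for a Splunk or ELK forwarder to ingest.
        print(json.dumps(check(name, url)))
```

Emitting one JSON object per line keeps the output trivially ingestible by Splunk, which matches the JSON-logging approach described in the stack above.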
# Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar roles related to Cloud, Engineer, Ops, React, DevOps, Amazon, Microsoft, SaaS, and Backend:

$70,000 — $120,000/year

# Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
DevOps Engineer – Emphasis on Linux / Docker / Node.js / Elasticsearch / MongoDB

The Opportunity:

We're looking for an experienced DevOps engineer based in Phoenix, AZ; Virginia Beach, VA; or the Washington, DC metro area. Remote (tele) workers will also be considered for the position if you have excellent communication skills and are willing to travel to one of the above locations several times per year.

The Day to Day:

* Provide operational support and automation tools to application developers
* Bridge the gap between development and operations to ensure successful delivery of projects
* Participate as a member of the application development team
* Build back-end frameworks that are maintainable, flexible, and scalable
* Operate and scale the application back end, including the database clusters
* Anticipate tomorrow's problems by understanding what users are trying to accomplish today

Requirements:

* DevOps experience with Linux or FreeBSD
* Experience with Linux Containers and Docker
* Configuration management experience, SaltStack preferred
* Exposure to the deployment and operations of Node.js applications
* Experience operating and optimizing Elasticsearch at large scale (see the sketch following this posting)
* Operational experience with Hadoop, MongoDB, Redis, Cassandra, or other distributed big data systems
* Experience with any of JavaScript, Python, Ruby, Perl, and/or shell scripting
* Comfort with compute clusters and many terabytes of data
* US citizenship / work authorization

Bonus Points:

* Development experience with Node.js or other HTTP backend tools
* Mac OS X familiarity
* BS or MS in a technology or scientific field of study
* High energy level and a pleasant, positive attitude!
* Evidence of working well within a diverse team

Compensation:

* Salary commensurate with experience, generally higher than in competitive industries
* Comprehensive benefits package
* Opportunities for advancement and a clear career path

About Us:

We conduct advanced technical research and develop innovative software and systems that help meet network security and reliability challenges for organizations worldwide. You can read more at our website.

Career Opportunities:

We have many other openings available. For a complete listing, visit jobs.vostrom.com
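To ground the Elasticsearch requirement above, here is a minimal sketch of an operational check against a cluster's REST API using only the standard library. The host, index name, and thresholds are illustrative placeholders; a production cluster would normally also require authentication and TLS.

```python
# Hypothetical Elasticsearch ops check using the cluster REST API.
# Host and index name are placeholders; real clusters usually need auth/TLS.
import json
import urllib.request

ES_HOST = "http://localhost:9200"  # illustrative host


def get_json(path: str) -> dict:
    """GET a JSON document from the Elasticsearch REST API."""
    with urllib.request.urlopen(ES_HOST + path, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


def cluster_ok() -> bool:
    """Return True unless the cluster reports 'red' health or shards are piling up in relocation."""
    health = get_json("/_cluster/health")  # standard Elasticsearch endpoint
    return health["status"] != "red" and health["relocating_shards"] < 10


def index_doc_count(index: str) -> int:
    """Return the document count for a single index."""
    return get_json(f"/{index}/_count")["count"]


if __name__ == "__main__":
    print("cluster healthy:", cluster_ok())
    print("docs in example index:", index_doc_count("events-2024"))  # placeholder index name
```

The same pattern extends to the `_cat` APIs or index-settings checks when tuning a large cluster.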
# Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar roles related to DevOps, JavaScript, InfoSec, Elasticsearch, Java, Perl, Python, Node, Ruby, Admin, Excel, Engineer, Sys Admin, Cassandra, Backend, Design, Docker, Digital Nomad, Travel, and Linux:

$70,000 — $120,000/year

# Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)