The Digital Modernization Sector has an exciting career opportunity for a Kubernetes Engineer in Colorado Springs, CO, supporting the US Space Force's Space Systems Command (SSC), Operational Command and Control Acquisition Delta, known as Kobayashi Maru. This role is instrumental in the development and deployment of mission-critical software for space defense, space domain awareness, and enabling data services.

Primary Responsibilities:
* Design, implement, and maintain highly available Kubernetes clusters across cloud and on-prem environments.
* Automate infrastructure provisioning, monitoring, and scaling using Infrastructure as Code (IaC) and CI/CD pipelines.
* Develop and manage Helm charts for application deployment and configuration management.
* Deploy and manage applications on cloud platforms such as Azure, AWS, Google Cloud, and Oracle Cloud Infrastructure (OCI).
* Monitor and troubleshoot Kubernetes workloads, networking, and persistent storage solutions.
* Implement Kubernetes security best practices, including RBAC, network policies, and container runtime security (see the illustrative sketch after this list).
* Optimize performance and reliability of containerized applications in distributed systems.
* Collaborate with development, security, and operations teams to enhance DevOps workflows and cloud-native application delivery.
* Integrate Kubernetes with service meshes, logging, and observability tools such as Istio, Prometheus, Grafana, and the ELK Stack.
* Participate in system upgrades, disaster recovery planning, and compliance initiatives such as NIST, CIS Benchmarks, and FedRAMP.
* Mentor junior engineers and contribute to knowledge sharing within the organization.
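As a rough illustration of the RBAC and network-policy work described above, the sketch below pairs a namespace-scoped read-only Role with a default-deny ingress NetworkPolicy. The namespace, group, and resource names are hypothetical, not details from the posting.

```yaml
# Hypothetical sketch: least-privilege RBAC plus a default-deny ingress policy.
# Namespace, group, and object names are illustrative only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: mission-apps
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ops-pod-reader
  namespace: mission-apps
subjects:
  - kind: Group
    name: ops-readers              # hypothetical operator group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: mission-apps
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes:
    - Ingress                      # no ingress rules listed, so all inbound traffic is denied
```

In practice, per-application policies would then selectively allow the traffic each workload needs on top of this default-deny baseline.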
Basic Qualifications:
* Requires a BS and 8+ years of prior relevant experience, or a Master's with 6+ years of prior relevant experience; additional years of experience will be accepted in lieu of a degree.
* Minimum 5+ years of experience working with Kubernetes in production environments.
* Must have a DoD-8570 IAT Level 2 baseline certification (Security+ CE or equivalent) to start and maintain.
* Must have Certified Kubernetes Application Developer (CKAD) and Azure Certified DevOps Engineer - Professional, or equivalent cloud certifications.
* Strong expertise in Kubernetes administration, troubleshooting, and performance tuning.
* Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) and their Kubernetes services (EKS, AKS, GKE).
* Proficiency in containerization technologies such as Docker and container runtime management.
* Solid understanding of Infrastructure as Code (Terraform, Ansible, CloudFormation).
* Experience with CI/CD pipelines using tools like GitLab CI/CD, Jenkins, ArgoCD, or Tekton.
* Deep knowledge of Kubernetes networking (Calico, Cilium, Istio, or Linkerd) and storage solutions (Ceph, Portworx, Longhorn).
* Expertise in monitoring and logging with Prometheus, Grafana, ELK, or OpenTelemetry.
* Strong scripting skills in Bash, Python, or Golang for automation.
* Familiarity with Kubernetes security best practices, including Pod Security Standards, RBAC, and image scanning tools (Trivy, Aqua, or Falco).
* Experience with GitOps methodologies (ArgoCD, FluxCD); an illustrative sketch follows the Preferred Qualifications below.
* Knowledge of serverless computing and Kubernetes-based event-driven architectures.
* Familiarity with service meshes and API gateways (Istio, Envoy, Traefik).
* Hands-on experience with AWS, Azure, or Google Cloud Platform security tools and configurations.
* Proficiency in cloud security frameworks such as CSA CCM (Cloud Controls Matrix), FedRAMP, or similar.
* Experience embedding security in CI/CD pipelines using tools like Jenkins, GitLab, or GitHub Actions.
* Experience with automation tools (e.g., Terraform, Ansible, or CloudFormation) and scripting languages (e.g., Python, PowerShell, or Bash).
* Extensive experience with containerization and orchestration platforms like Kubernetes.
* Strong analytical and problem-solving skills, with the ability to communicate complex technical concepts to non-technical stakeholders.
* Knowledge of hybrid cloud networking (e.g., VPNs, ExpressRoute, Direct Connect).
* Experience with DevSecOps pipelines and integration.
* Experience working in agile development and DevOps-driven environments.
* US citizenship and possession of a current, active DoD TS/SCI clearance.

Preferred Qualifications:
* Master's degree in computer science.
* Multi-Cluster & Hybrid Deployments: experience managing federated or multi-cluster Kubernetes environments across hybrid and multi-cloud architectures.
* Custom Kubernetes Operators: developing and maintaining Kubernetes Operators using the Operator SDK (Go, Python, or Ansible).
* Cluster API (CAPI) Expertise: experience with Cluster API for managing the Kubernetes lifecycle across cloud providers.
* Advanced Scheduling & Tuning: custom scheduling, affinity/anti-affinity rules, and performance optimization for workloads.
* Kubernetes Hardening: deep knowledge of CIS benchmarks, PodSecurityPolicies (PSP), and Kyverno or Open Policy Agent (OPA).
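To make the GitOps qualification above concrete, here is a minimal, hypothetical Argo CD Application manifest that keeps a cluster namespace synced to a Git repository. The repository URL, paths, and names are placeholders, not details from the posting.

```yaml
# Hypothetical GitOps sketch: an Argo CD Application that continuously syncs
# manifests from a Git repository into a target namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-service             # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/sample-service.git  # placeholder repo
    targetRevision: main
    path: deploy/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-service
  syncPolicy:
    automated:
      prune: true                  # remove resources that were deleted from Git
      selfHeal: true               # revert manual drift back to the Git state
```

With automated sync enabled, Git becomes the source of truth and manual changes to the cluster are reverted, which is the core of the GitOps workflow the qualification refers to.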
Original Posting: March 31, 2025

For U.S. Positions: While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days, with an anticipated close date of no earlier than 3 days after the original posting date listed above.

Pay Range: $104,650.00 - $189,175.00. The Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.

Leidos: Leidos is a Fortune 500® innovation company rapidly addressing the world's most vexing challenges in national security and health. The company's global workforce of 47,000 collaborates to create smarter technology solutions for customers in heavily regulated industries. Headquartered in Reston, Virginia, Leidos reported annual revenue of approximately $15.4 billion for the fiscal year ended December 29, 2023. For more information visit www.Leidos.com.

Pay and Benefits: Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Income Protection, Paid Leave, and Retirement. More details are available here.

Securing Your Data: Leidos will never ask you to provide payment-related information at any part of the employment application process, and Leidos will communicate with you only through emails sent from a Leidos.com email address. If you receive an email purporting to be from Leidos that asks for payment-related information or any other personal information, please report the email to [email protected].

Commitment and Diversity: All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law. Leidos will also consider for employment qualified applicants with criminal histories, consistent with relevant laws.
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar Docker, DevOps, Cloud, API, Junior, Golang and Engineer jobs:

$60,000 — $80,000/year
#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location

6314 Remote/Teleworker US
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also never have to pay for equipment that the company promises to reimburse later, and never pay for training you are required to complete. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages about "how to work online" are also scams; don't use them or pay for them. Always verify that you are actually talking to the company in the job post and not an imposter. A good check is to look at the domain name of the site or email and confirm it is the company's main domain. Scams in remote work are rampant, so be careful! Read more to avoid scams. When you click the apply button above, you will leave Remote OK and go to the company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.
Who we are: We are a leader in fraud prevention and AML compliance. Our platform uses device intelligence, behavioral biometrics, machine learning, and AI to stop fraud before it happens. Today, over 300 banks, retailers, and fintechs worldwide use Sardine to stop identity fraud, payment fraud, account takeovers, and social engineering scams. We have raised $75M from world-class investors including Andreessen Horowitz, Visa, Experian, FIS, and Google Ventures.

Our culture:
* We have hubs in the Bay Area, NYC, Austin, and Toronto. However, we have a remote-first work culture. #WorkFromAnywhere
* We hire talented, self-motivated people and get out of their way.
* We value performance, not hours worked. We believe you shouldn't have to miss your family dinner, your kid's school play, or doctor's appointments for the sake of adhering to an arbitrary work schedule.

About the Role: Site Reliability Engineers (SREs) are responsible for keeping all production services running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound engineering principles, operational discipline, and mature automation to our operating environments. As an SRE at Sardine, you will build and run the core components that process billions of events to protect financial institutions from fraud and compliance risks. You will also partner with our other engineering teams to help make their services more performant, scalable, observable, and reliable. We believe every engineering team at Sardine should be responsible for the software they build, and SREs play a critical part in providing the tools, practices, and expertise to make that happen.

You will:
* Run our infrastructure with Terraform, CI/CD (GitHub and ArgoCD), and Kubernetes, together with the DevOps team.
* Take a proactive rather than reactive approach to monitoring: build monitoring that alerts on symptoms rather than on outages (see the illustrative sketch after this list).
* Participate in on-call rotations, along with every member of the engineering team.
* Improve and automate operational processes.
* Constantly improve the security of the product and of security operations.
* Debug production issues across services and levels of the stack.
* Partner with engineering teams to ensure their products meet production standards.
* Be willing to go out of your comfort zone into unfamiliar territory to solve unique issues.
* Help shape our company's engineering culture and keep engineering standards high.
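As a rough sketch of "alerting on symptoms rather than outages", the hypothetical Prometheus rule below pages on a user-facing error-rate symptom instead of waiting for the service to be fully down. The job label, metric name, and thresholds are invented for illustration.

```yaml
# Hypothetical Prometheus alerting rule: page on an elevated 5xx error rate,
# not only when the service is already down. Labels and thresholds are illustrative.
groups:
  - name: api-symptoms
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{job="api", status=~"5.."}[5m]))
            /
          sum(rate(http_requests_total{job="api"}[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "API 5xx error rate above 5% for 10 minutes"
          description: "Users are seeing errors even though the service may still be up."
```

Symptom-based rules like this tend to catch partial degradations (bad deploys, overloaded dependencies) well before a simple "service down" check would fire.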
An ideal candidate has:
* 5+ years of experience designing, building, and operating large-scale production systems.
* Experience with Google Cloud Platform.
* Experience with monitoring tools like Datadog, and preferably with open-source tooling like Prometheus, Grafana, and Jaeger (tracing).
* Elasticsearch experience (nice to have).
* Experience with container orchestration tools like Kubernetes, and with tools that support Kubernetes deployment, like ArgoCD and Helm.
* Strong programming skills, primarily in Golang and/or other languages.
* Strong knowledge of database optimization.
* Good knowledge of security practices for cloud infrastructure.

Benefits we offer:
* Generous compensation in cash and equity
* Early exercise for all options, including pre-vested
* Work from anywhere: remote-first culture
* Flexible paid time off, year-end break, self-care days off
* Health insurance, dental, and vision coverage for employees and dependents (US and Canada specific)
* 4% matching in 401k / RRSP (US and Canada specific)
* MacBook Pro delivered to your door
* One-time stipend to set up a home office: desk, chair, screen, etc.
* Monthly meal stipend
* Monthly social meet-up stipend
* Annual health and wellness stipend
* Annual learning stipend
* Unlimited access to an expert financial advisor

Join a fast-growing company with world-class professionals from around the world. If you are seeking a meaningful career, you have found the right place, and we would love to hear from you.
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar DevOps, Cloud and Golang jobs:

$55,000 — $110,000/year
AirDNA began with a big dream in a balmy California garage in 2015. Since then, the technology startup has grown into the leading provider of data and business intelligence for the billion-dollar travel and vacation rental industry, with offices in Denver and Barcelona.

Our self-serve platform eliminates guesswork and equips Airbnb hosts with the smart, competitive insights needed to succeed in the ever-evolving short-term rental landscape.

We also arm enterprise clients with customized reports and in-depth dashboards to ensure they can scale and invest strategically. These customers include hundreds of top financial institutions, real estate companies, vacation rental managers, and destination marketing organizations around the world.

We track the daily performance of over 10 million Airbnb and Vrbo properties across 120,000 global markets. We also collect data from over a million partner properties. This marriage of scraped and source data, enhanced by our proprietary algorithms, makes our solutions the most accurate and comprehensive in the world.

We're firm believers that data isn't the destination; it's the starting point. The launchpad. The bedrock for any future-forward business.

The Role: Come join our Platform team and help drive our growth by designing, maintaining, and improving our platform and processes. The ideal person for this role is driven to design robust, scalable, secure infrastructure, cares about the details, and enjoys helping both individual engineers and their teams work more effectively.

Here's what you'll get to do:
* Build and maintain monitoring, logging, and telemetry solutions for proactive performance and reliability management (experience with Datadog, Prometheus, Grafana).
* Evaluate and integrate new technologies to enhance platform capabilities, especially around containers, databases, and cloud-native architectures.
* Ensure security, compliance, and cost optimization in all platform solutions, utilizing tools like IAM, GuardDuty, and AWS Security Hub.
* Design, implement, and manage scalable infrastructure solutions using AWS services (EC2, S3, RDS, Lambda, CloudFront, etc.).
* Manage, scale, and optimize multiple databases (PostgreSQL + Druid) to ensure performance, availability, and redundancy.
* Collaborate with development and operations teams to streamline release processes and integrate best practices for infrastructure as code (Terraform, CloudFormation); see the illustrative pipeline sketch after the next list.
* Work closely with stakeholders to identify infrastructure needs and lead initiatives to scale the platform in alignment with business goals.
* Drive continuous improvement in the platform's architecture and processes, optimizing for performance, reliability, and operational efficiency.
* Collaborate with cross-functional teams to align platform development with product goals and strategies.
* Mentor and guide junior team members, providing technical leadership and driving best practices across the platform team.

Here's what you'll need to be successful:
* Strong familiarity with Amazon Web Services; multi-account experience preferred.
* Expertise using Docker and Kubernetes.
* Experience with developing and maintaining CI/CD pipelines to automate application deployment and infrastructure provisioning.
* Able to diagnose and troubleshoot problems in a distributed microservice environment.
* Solid understanding of TCP/IP networking.
* Expertise with Linux (Ubuntu, Alpine, and/or Amazon Linux preferred).
* Understanding of DevOps practices.
* Demonstrated experience in managing or leading platform teams, with the ability to grow the team and develop talent within it.
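As a rough, hypothetical illustration of the CI/CD work referenced above (GitLab pipelines also appear in the nice-to-have list below), here is a minimal GitLab CI configuration that builds a container image and then plans and applies Terraform. Stage names, images, and variables are placeholders, not details from the posting.

```yaml
# Hypothetical GitLab CI sketch: build and push a container image, then plan/apply
# Terraform with a manual gate for production. All names and images are placeholders.
stages:
  - build
  - plan
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

terraform-plan:
  stage: plan
  image: hashicorp/terraform:1.7
  script:
    - terraform init -input=false
    - terraform plan -input=false -out=tfplan
  artifacts:
    paths:
      - tfplan

terraform-apply:
  stage: deploy
  image: hashicorp/terraform:1.7
  script:
    - terraform init -input=false
    - terraform apply -input=false tfplan
  when: manual                     # require a human to approve infrastructure changes
  environment: production
```

The manual gate on the apply job is one common way to keep automated pipelines from changing production infrastructure without review.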
Here's what would be nice to have:
* GitLab pipelines
* ArgoCD
* Linkerd, Istio, or another service mesh
* ELK stack or similar logging platforms
* Ansible or other configuration management tools
* CloudFormation or other IaC tools
* JSON/YAML
* OpenVPN
* Apache Airflow
* Databases (PostgreSQL and Druid preferred)
* Cloudflare
* Atlassian tools such as Jira, Confluence, and StatusPage
* Programming experience: shell scripting, Python, Golang preferred
* Experience with performance optimization of distributed microservices

Here's what you can expect from us:
* Competitive cash compensation and benefits; the salary range for this position is $150,000 - $180,000 per year.

Colorado Salary Statement: The salary range displayed is specifically for potential hires who will work or reside in the state of Colorado if selected for this role. Any offered salary is determined based on internal equity, internal salary ranges, market data/ranges, the applicant's skills and prior relevant experience, and certain degrees and certifications.

Benefits include:
* Medical, dental, and vision packages to meet your needs
* Unlimited vacation policy; take time when you need it
* Eligibility for the Company's annual discretionary bonus program
* 401K with employer match up to 4%
* Continuing education stipend
* 16 weeks of paid parental leave
* New MacBooks for employees

Office Perks for Denver-Based Employees:
* Commuter/RTD benefit
* Quarterly team outings
* In-office lunch Tuesday - Thursday
* A great office located just a few blocks from Union Station in the heart of Denver's historic LoDo neighborhood: high ceilings, exposed brick, a fully stocked kitchen (snacks, espresso, etc.), and plenty of meeting rooms and brainstorming nooks
* Pet-friendly!

AirDNA seeks to attract the best-qualified candidates who support the mission, vision, and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status, or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions, and privileges of employment including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance, and retirement.

We are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process, unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace, or the company as a whole. To request accommodation, please email [email protected].
Please allow 24 hours for us to process your request.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Amazon, Docker, DevOps, Education, Senior, Marketing, Golang and Engineer jobs:

$50,000 — $100,000/year
#Location

Remote
This job post (Snowplow Analytics) is closed and the position is probably filled. Please do not apply.
Closed automatically 3 years ago after the apply link returned a 404 error.
At Snowplow, we are on a mission to empower people to use data to differentiate. We provide technology that enables customers not only to control their data, but to do amazing things with that control.

As part of that effort, we're changing the way people do digital analytics: moving companies away from one-size-fits-all vendors, such as Google Analytics and Adobe, that dictate what should be done with their data, and enabling them to collect and own their data themselves.

The opportunity

Our Managed Service offering has grown significantly over the last year, and we now orchestrate and monitor the Snowplow event pipeline across more than 100 customer-owned AWS accounts, with individual accounts processing many billions of events per month.

We are looking for our second Site Reliability Engineer to help us grow to managing 1,000 and then 10,000 AWS, GCP and Azure accounts. You'll work closely with our Tech Ops Lead on all aspects of our proprietary deployment, orchestration and monitoring stack.

The team and mission:

Technical Operations at Snowplow is responsible for two distinct domains:
* Snowplow's internal infrastructure, which powers Snowplow Insights, CI/CD, the Snowplow website, and our support tooling, all running on AWS
* Our customers' Snowplow-related infrastructure, running in their own AWS accounts

Within both domains, Tech Ops at Snowplow is striving to increase service reliability, fulfil customer requests in a timely fashion, and automate recurring tasks. Task automation is essential as our customer base grows, because our "infrastructure estate" scales linearly with our customer numbers, unlike most software businesses.

Our roadmap includes:
* Deploying, orchestrating and monitoring Snowplow on GCP, Azure and on-premise, not just AWS
* "One click" infrastructure deployment and maintenance
* Building self-healing and self-upgrading infrastructure, which learns how to optimize itself for cost, performance and reliability

This is an enormously ambitious undertaking but also, we hope, a hugely exciting infrastructure automation challenge!

Technologies:

Today, our in-house stack uses pragmatic technologies including Docker, Ansible, Consul, CloudFormation, bash and Golang to manage our internal and customer infrastructure. For our next level of automation, we are now exploring tools such as Terraform, Kubernetes and Vault.

Responsibilities:
* Developing software to automate, monitor and maintain client-deployed and Snowplow-internal infrastructure and services (see the illustrative sketch after this list)
* Providing deep technical support to internal and client teams
* Performing planned upgrades and modifications to customer infrastructure
* Handling high-severity internal or customer incidents, ensuring we meet all SLAs
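Since the stack described above leans on Ansible and Bash for automation, here is a minimal, hypothetical Ansible playbook of the kind this role might write to roll a configuration change across customer pipeline hosts. The inventory group, service name, and file paths are invented for illustration.

```yaml
# Hypothetical Ansible sketch: roll out an updated collector configuration across
# pipeline hosts a few at a time, restarting the service only when the file changes.
# Inventory group, paths, and service name are placeholders.
- name: Update event collector configuration
  hosts: pipeline_collectors
  become: true
  serial: 2                        # limit the blast radius: two hosts at a time
  tasks:
    - name: Deploy collector config
      ansible.builtin.template:
        src: collector.conf.j2
        dest: /etc/collector/collector.conf
        mode: "0644"
      notify: Restart collector

  handlers:
    - name: Restart collector
      ansible.builtin.service:
        name: collector
        state: restarted
```

Running hosts in small batches (`serial`) and restarting only on change are common ways to keep this kind of fleet-wide maintenance low-risk.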
Within the software engineering side, you will be responsible for the implementation, deployment and stability of your systems and services, and you will own software end to end, with a high expectation of ownership over anything that is deployed.

Within the operational side, you will join our on-call process for incident resolution and take part in the regular rotation for client infrastructure work, with a strong mandate to continue automating.

What we are looking for:

This role will be a great fit for somebody who:
* Has deep knowledge of Linux, networking, containers and similar, and is able to troubleshoot complex problems on individual servers and in distributed systems
* Has worked with at least one of: Amazon Web Services, GCP or Azure
* Has been part of an on-call rotation
* Has interacted directly with customers to solve their specific technical issues
* Is comfortable scripting in one or more of: Bash, Python, Ruby or Perl
* Is comfortable programming in one or more of: Java, Scala, Golang or Python

This role would be a great fit for a software engineer or systems administrator who wants to transition into a full SRE role.

Security:

The integrity of our customers' systems and data underpins everything we do at Snowplow. As part of their probation, candidates will be put through a full background security check.

Out-of-hours work:

An important part of this role relates to out-of-hours work, particularly around:
* Performing planned upgrades and modifications to customer infrastructure outside of their working hours
* Being on-call to handle high-severity internal or customer incidents, ensuring we meet all SLAs

The on-call process for the Tech Ops team is still evolving; we will discuss these requirements with short-listed candidates.

What you'll get in return:
* Competitive package based on experience
* 25 days holiday a year plus bank holidays
* The freedom to work wherever suits you best
* Two fantastic company away-weeks a year
* Working alongside a strong and talented team

Office-specific:
* Convenient central Shoreditch location
* Continuous supply of Pact coffee
* Regular mystery events
* MacBook

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Admin, Engineer, Sys Admin, Amazon, Ruby, Stats and Golang jobs:

$70,000 — $120,000/year
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.