The Digital Modernization Sector has an exciting career opportunity for a Kubernetes Engineer in Colorado Springs, CO, supporting the US Space Force's Space Systems Command (SSC), Operational Command and Control Acquisition Delta, known as Kobayashi Maru. This role is instrumental in the development and deployment of mission-critical software for space defense, space domain awareness, and enabling data services.

Primary Responsibilities:

* Design, implement, and maintain highly available Kubernetes clusters across cloud and on-prem environments.
* Automate infrastructure provisioning, monitoring, and scaling using Infrastructure as Code (IaC) and CI/CD pipelines.
* Develop and manage Helm charts for application deployment and configuration management.
* Deploy and manage applications on cloud platforms such as Azure, AWS, Google Cloud, and Oracle Cloud Infrastructure (OCI).
* Monitor and troubleshoot Kubernetes workloads, networking, and persistent storage solutions.
* Implement Kubernetes security best practices, including RBAC, network policies, and container runtime security.
* Optimize performance and reliability of containerized applications in distributed systems.
* Collaborate with development, security, and operations teams to enhance DevOps workflows and cloud-native application delivery.
* Integrate Kubernetes with service meshes, logging, and observability tools such as Istio, Prometheus, Grafana, and the ELK Stack.
* Participate in system upgrades, disaster recovery planning, and compliance initiatives such as NIST, CIS Benchmarks, and FedRAMP.
* Mentor junior engineers and contribute to knowledge sharing within the organization.

Basic Qualifications:

* Requires a BS and 8+ years of prior relevant experience, or a Master's with 6+ years of prior relevant experience; additional years of experience will be accepted in lieu of a degree.
* Minimum 5+ years of experience working with Kubernetes in production environments.
* Must have a DoD-8570 IAT Level 2 baseline certification (Security+ CE or equivalent) to start and maintain.
* Must have Certified Kubernetes Application Developer (CKAD), and Azure Certified DevOps Engineer - Professional or equivalent cloud certifications.
* Strong expertise in Kubernetes administration, troubleshooting, and performance tuning.
* Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) and their Kubernetes services (EKS, AKS, GKE).
* Proficiency in containerization technologies like Docker and container runtime management.
* Solid understanding of Infrastructure as Code (Terraform, Ansible, CloudFormation).
* Experience with CI/CD pipelines using tools like GitLab CI/CD, Jenkins, ArgoCD, or Tekton.
* Deep knowledge of Kubernetes networking (Calico, Cilium, Istio, or Linkerd) and storage solutions (Ceph, Portworx, Longhorn).
* Expertise in monitoring and logging with Prometheus, Grafana, ELK, or OpenTelemetry.
* Strong scripting skills in Bash, Python, or Golang for automation.
* Familiarity with Kubernetes security best practices, including Pod Security Standards, RBAC, and image scanning tools (Trivy, Aqua, or Falco) (a minimal sketch follows this list).
* Experience with GitOps methodologies (ArgoCD, FluxCD).
* Knowledge of serverless computing and Kubernetes-based event-driven architectures.
* Familiarity with service meshes and API gateways (Istio, Envoy, Traefik).
* Hands-on experience with AWS, Azure, or Google Cloud Platform security tools and configurations.
* Proficiency in cloud security frameworks such as CSA CCM (Cloud Controls Matrix), FedRAMP, or similar.
* Experience embedding security in CI/CD pipelines using tools like Jenkins, GitLab, or GitHub Actions.
* Experience with automation tools (e.g., Terraform, Ansible, or CloudFormation) and scripting languages (e.g., Python, PowerShell, or Bash).
* Extensive experience with containerization and orchestration platforms like Kubernetes.
* Strong analytical and problem-solving skills with the ability to communicate complex technical concepts to non-technical stakeholders.
* Knowledge of hybrid cloud networking (e.g., VPNs, ExpressRoute, Direct Connect).
* Experience with DevSecOps pipelines and integration.
* Experience working in agile development and DevOps-driven environments.
* US citizenship and possession of a current, active DoD TS/SCI clearance.
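To make the RBAC item above concrete, here is a minimal, hypothetical sketch that uses the official Kubernetes Python client to create a read-only Role in a namespace. The namespace and role name are invented for illustration; in practice this kind of policy would more likely be managed declaratively through Helm charts or GitOps rather than imperative API calls.

```python
# Hypothetical sketch: creating a least-privilege, read-only Role with the official
# Kubernetes Python client (pip install kubernetes). Namespace and names are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

rbac = client.RbacAuthorizationV1Api()

pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="demo-apps"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                 # "" = the core API group
            resources=["pods", "pods/log"],
            verbs=["get", "list", "watch"],  # read-only access
        )
    ],
)

rbac.create_namespaced_role(namespace="demo-apps", body=pod_reader)
print("Created Role:", pod_reader.metadata.name)
```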
Preferred Qualifications:

* Master's degree in Computer Science
* Multi-Cluster & Hybrid Deployments: experience managing federated or multi-cluster Kubernetes environments across hybrid and multi-cloud architectures.
* Custom Kubernetes Operators: developing and maintaining Kubernetes Operators using the Operator SDK (Go, Python, or Ansible).
* Cluster API (CAPI) Expertise: experience with Cluster API for managing the Kubernetes lifecycle across cloud providers.
* Advanced Scheduling & Tuning: custom scheduling, affinity/anti-affinity rules, and performance optimization for workloads.
* Kubernetes Hardening: deep knowledge of CIS Benchmarks, PodSecurityPolicies (PSP), and Kyverno or Open Policy Agent (OPA).

Original Posting: March 31, 2025

For U.S. Positions: While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days, with an anticipated close date no earlier than 3 days after the original posting date listed above.

Pay Range: $104,650.00 - $189,175.00

The Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.

Leidos

Leidos is a Fortune 500® innovation company rapidly addressing the world's most vexing challenges in national security and health. The company's global workforce of 47,000 collaborates to create smarter technology solutions for customers in heavily regulated industries. Headquartered in Reston, Virginia, Leidos reported annual revenue of approximately $15.4 billion for the fiscal year ended December 29, 2023. For more information visit www.Leidos.com.

Pay and Benefits

Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Income Protection, Paid Leave, and Retirement. More details are available here.

Securing Your Data

Leidos will never ask you to provide payment-related information at any part of the employment application process. And Leidos will communicate with you only through emails that are sent from a Leidos.com email address. If you receive an email purporting to be from Leidos that asks for payment-related information or any other personal information, please report the email to [email protected].
Commitment and Diversity

All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law. Leidos will also consider for employment qualified applicants with criminal histories, consistent with relevant laws.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Docker, DevOps, Cloud, API, Junior, Golang, and Engineer roles:

$60,000 — $80,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
6314 Remote/Teleworker US
Beware of fraudulent recruitment activities impersonating Degreed. Scammers are using our name, "Degreed", impersonating our website, and claiming to be affiliated with Degreed as part of a recruitment scam. Please note that Degreed does not recruit talent through WhatsApp, Telegram, or any other direct-messaging systems other than Degreed.com e-mail and, during the interview process, phone numbers. We also do not request sensitive personal or financial information in an unsolicited manner, nor do we offer employment opportunities that require upfront payments or promise unrealistic returns.

Degreed is the upskilling platform that connects learning to opportunities. We integrate everything people use to learn and build their careers (skill insights, LMSs, courses, videos, articles, and projects) and match everyone to growth opportunities that fit their unique skills, roles, and goals.

The Degreed Client Experience (CX) team plays a crucial role in ensuring customer satisfaction and success. The CX teams' deep knowledge allows them to strategically guide clients, providing tremendous value. The CX team actively participates in and assists clients with their learning journeys and transformations. The candidate will work closely with the CX technical teams to design and maintain client support workflows powered by AI.

This role will be based onsite in Bengaluru, India. After an in-office onboarding period, incumbents are expected to be available in the office a few days per week as part of a hybrid work model. Candidates will also be required to travel internationally 1-2 times annually for full company gatherings.

Day in the Life

* Design, develop, and maintain cloud-based AI applications, leveraging a full-stack technology stack to deliver high-quality, scalable, and secure solutions.
* Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics features and functionality that meet business requirements and user needs.
* Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability.
* Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation (a minimal sketch follows the qualifications below).
* Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies.
* Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience.
* Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code.
* Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform.
* Collaborate with DevOps and infrastructure teams to automate deployment and release processes, implement CI/CD pipelines, and optimize the development workflow for the analytics engineering team.
* Collaborate closely with and influence business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors.
* Influence, educate, and directly support the analytics application engineering capabilities of our clients

Who You Are

We seek outstanding individuals to join our outstanding teams. As a Lead AI Software Engineer you not only want to deliver great products, you also want to collaborate with other great engineers:

* 5+ years at Senior or Staff level, or equivalent software development experience
* Experience with client-side technologies such as React, Angular, Vue.js, HTML, and CSS
* Experience with server-side technologies such as Django, Flask, and FastAPI
* Experience with cloud platforms and services (AWS, Azure, GCP) via Terraform automation (good to have)
* 2+ years of Python
* Experience with Git for versioning and collaborating
* Experience with DevOps, CI/CD, GitHub Actions
* Exposure to LLMs, prompt engineering, and LangChain a plus
* Experience with workflow orchestration; it doesn't matter if it's dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other
* Experience implementing large-scale structured or unstructured databases, orchestration, and container technologies such as Docker or Kubernetes
* Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines at a level they can engage with
* Curiosity, proactivity, and critical thinking
* Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and the implications of computer architecture on software performance
* Strong knowledge of designing API interfaces
* Knowledge of data architecture, database schema design, and database scalability
* Agile development methodologies
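To make the API and microservice work described above concrete, here is a minimal, hypothetical FastAPI service exposing a single analytics endpoint. The endpoint, model, and in-memory data are invented for illustration and are not part of the posting.

```python
# Hypothetical sketch of a small analytics microservice using FastAPI
# (one of the server-side frameworks listed above). All names and data are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="learning-analytics-api")

# Toy in-memory store standing in for a real analytics backend.
SKILL_RATINGS = {"python": 4.2, "sql": 3.8, "kubernetes": 3.1}

class SkillRating(BaseModel):
    skill: str
    rating: float

@app.get("/skills/{skill}", response_model=SkillRating)
def get_skill_rating(skill: str) -> SkillRating:
    """Return the average rating for a skill, or 404 if the skill is unknown."""
    if skill not in SKILL_RATINGS:
        raise HTTPException(status_code=404, detail=f"unknown skill: {skill}")
    return SkillRating(skill=skill, rating=SKILL_RATINGS[skill])

# Run locally with: uvicorn analytics_api:app --reload   (module name is hypothetical)
```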
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Docker, Travel, DevOps, Cloud, HTML, Git, API, Senior, and Engineer roles:

$82,500 — $127,500/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
Bengaluru, Karnataka, India
Our Mission: To capture the hearts and minds of millions of players across the world by creating unforgettable games powered by the best technology.

About Us:

PlayQ is a rapidly growing global entertainment and technology company delivering high-quality mobile titles and innovative game development solutions to a worldwide audience. Our games have been downloaded more than 60 million times across the globe, with millions of users playing every day!

Our dedicated teams, based in downtown Santa Monica, CA, work together to craft the clever, visually stunning, and unforgettable experiences that our players love. Our emphasis on individual leadership means each team member has the opportunity to make a big impact, while our commitment to creative freedom gives them the ability to create whatever they can imagine.

It's this mindset that has led us to develop our own IP, infuse games with rich storytelling, build our own development tools, and solve the deepest technical challenges, all in the name of disrupting the mobile gaming landscape.

Why you'll want to come to work:

As a Lead DevOps Engineer, you will play a pivotal role in spearheading the design, implementation, and maintenance of robust and scalable infrastructure solutions. You will manage DevOps engineering projects on the team, collaborate closely with our global game and services development teams, and automate processes to enhance overall company efficiency. You will be at the forefront of overseeing the design, creation, and management of CI/CD pipelines, cloud infrastructure, and backend service deployments to ensure seamless and automated delivery of applications.

In this role, you will be instrumental in shaping the technological landscape, fostering innovation, and contributing to the success of PlayQ's mission.
You'd be an ideal fit for this role if you:

* Possess a visionary mindset, shaping high-level architectures that power the future of gaming infrastructure.
* Inspire and drive global teams of engineers, translating blueprints into resilient, scalable systems with a "can-do" attitude.
* Roll up your sleeves and dive into technical challenges, leading by example and fostering a hands-on, solutions-focused approach.
* Display a proven track record of driving results, demonstrably reducing deployment time and optimizing performance.
* Communicate effectively across technical and non-technical audiences, collaborating seamlessly with global game and service development teams.
* Possess a passion for learning and innovation, staying ahead of the curve in cloud technologies and DevOps best practices.
* Embrace a fast-paced, collaborative environment, thriving in a culture where ideas are valued and teamwork is paramount.

What you'll get to create:

* Lead the design, implementation, and maintenance of highly scalable and secure infrastructure solutions, collaborating closely with development teams to automate manual processes and boost overall team efficiency.
* Architect and oversee efficient CI/CD pipelines built with TeamCity and GitHub Actions, ensuring seamless and automated delivery of game and service updates.
* Architect and lead the development of our Python-based interactive Slack bot, ensuring scalability, robustness, and seamless integration with our internal systems and workflows (a minimal sketch follows this list).
* Leverage proficiency in Bash, Python, and Kotlin to build and maintain robust automation scripts, CI/CD pipelines, and IaC configurations.
* Architect and implement a robust, scalable infrastructure for game deployment using Harness, Terraform, and AWS cloud platform tools (EKS, S3, CloudFront, etc.).
* Partner with cross-functional teams and engineers to streamline build and deployment processes, aligning them with business objectives.
* Drive effective communication with stakeholders, providing regular updates and actionable insights into DevOps initiatives.
* Proactively identify and implement innovative CI/CD and IaC tools and processes to continuously optimize workflows and efficiency.
* Manage and document deployment processes and automation, fostering transparency and maintainability.
* Perform ongoing competitive research into industry best practices.
* Mentor and guide junior team members, fostering a culture of growth and knowledge sharing.
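The Python-based interactive Slack bot called out above is a good example of the glue automation this role owns. Below is a minimal, hypothetical sketch using Slack's Bolt for Python framework (slack_bolt); the /deploy command, tokens, and behavior are placeholders rather than details from the posting.

```python
# Hypothetical sketch: a tiny interactive Slack bot built with Bolt for Python
# (pip install slack_bolt). Tokens come from the environment; the /deploy command
# and its behavior are invented for illustration.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.command("/deploy")
def handle_deploy(ack, respond, command):
    """Acknowledge the slash command and report what would be deployed."""
    ack()
    service = (command.get("text") or "").strip() or "unknown-service"
    # A real bot would trigger a CI/CD pipeline (e.g. TeamCity or Harness) here.
    respond(f"Queued deployment for `{service}` requested by <@{command['user_id']}>.")

if __name__ == "__main__":
    # Socket Mode avoids exposing a public HTTP endpoint while prototyping.
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```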
What you'll bring to the team:

* Bachelor's degree in Computer Science, Information Technology, or a related field.
* 8+ years of software development or DevOps experience.
* 3+ years of experience as an engineering lead, delegating tasks to key contributors.
* 3+ years of experience developing and maintaining Python scripts for automation tasks, CI/CD pipelines, and API integrations.
* Proven success in leading multiple engineering projects from execution to completion, on time and within budget.
* Proven expertise in predicting obstacles and creating project plans that mitigate risk and disruption.
* Strong expertise in cloud platforms such as AWS.
* Expert proficiency in infrastructure as code (IaC) using tools like Terraform or CloudFormation.
* Proven success in leveraging containerization and orchestration tools such as Docker and Kubernetes.
* In-depth knowledge of CI/CD and IaC orchestration tools (TeamCity, Harness), version control systems (GitHub), and automated testing.
* Solid understanding of security best practices and compliance standards.
* Ability to effectively communicate complex, technical information to non-technical teams.
* Ability to independently troubleshoot and develop solutions for common engineering challenges.

If you've gotten this far, we hope you're feeling excited about this role. Even if you don't feel you meet every single requirement, we still encourage you to apply. We're eager to meet people who believe in PlayQ's mission and can contribute to our team in a variety of ways, not just candidates who check all the boxes.

What we can offer you:

* Competitive compensation and equity options
* Comprehensive medical, dental, vision, life, long-term disability & pet insurance
* Flexible time off
* Unmatched learning opportunities and a mission-driven team
* Advancement and mentorship opportunities
* 401K plan with company match
* Flexible workplace: PlayQ's workplace includes a combination of in-office, hybrid, and remote roles. Our flexible approach to the workplace experience, internal comms, and tech stack promotes collaboration, communication, and camaraderie!
* Brand new creative office space equipped with tons of natural light, communal areas for collaboration, and free parking
* Walking distance to restaurants, coffee shops, and the metro
* Stocked kitchen with snacks and beverages
* Monthly team building events and cross-departmental lunches
* Help build and support awesome GAMES. For a living! Who doesn't love games?

At PlayQ, we leverage competitive benchmarking data when setting each role's base pay range. Individual base salary will be determined based on job-related factors which may include knowledge, skills, experience, scope of role, business need, and geographic location. The estimated base pay range for this role is listed below.

$135,000 - $180,000 USD

Additional information:

* In-office location: Santa Monica, CA
* This role is open to remote employees located in select US states including California, Washington, Nevada, Texas, Florida, Pennsylvania, New York, and Georgia.
* This role is offered as a full-time position.

Interested? Please get in touch!

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Python, Docker, DevOps, Cloud, API, Mobile, Junior, Engineer, and Backend roles:

$60,000 — $110,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
Santa Monica, California, United States
At Hiro, we build developer tools that bring Web3 to Bitcoin. Our suite of tools unlocks the full potential of Bitcoin through smart contracts, digital assets, and decentralized applications. With Hiro tools, developers can test and deploy smart contracts, spin up nodes and other server-side resources for scaling, and more. Building on Bitcoin is hard. Hiro's developer tools make it easier. We're very proud to say that Hiro won 4 of Built In's Best Places to Work awards, including U.S. Best Startups to Work For, Remote Best Startups to Work For, Remote Best Places to Work, and New York Best Startups to Work For!

Hiro is funded and backed by more than $75 million from Union Square Ventures, Y Combinator, Lux Capital, Winklevoss Capital, Naval Ravikant, and others.

About The Opportunity

Hiro is looking for a passionate and collaborative DevOps Engineer to help scale our infrastructure to meet our growing needs and build "developer empowering" solutions. DevOps engineers at Hiro are hybrid systems and software engineers with a two-pronged mission:

* Empower teams to build, test, deploy, and monitor services used by hundreds of thousands of users (soon to be millions) at scale with high velocity and quality.
* Eliminate toil across our engineering, IT, and other operations.

What You'll Do

* Improve the tooling and automation for building, testing, and deploying software and services
* Collaborate with engineering teams on building and launching new products and features
* Evangelize and implement industry best practices to improve the security and ease of use of our production environment
* Diagnose problems from all sides and quickly narrow down potential solutions
* Debug production issues across services and multiple levels of the stack
* Improve operational standards, tooling, and processes
* Engineer solutions to automate and streamline monitoring and incident escalation, improving resiliency and uptime (a minimal monitoring sketch follows the qualifications below)

What We're Looking For

* 6+ years of experience in an SRE, DevOps, or equivalent role
* Excellent communication skills and comfort working with a diverse team across time zones
* A sense of ownership, humility, and a bias to action
* Able to see a problem from all sides and quickly narrow down potential solutions
* Proficiency with Bash scripting, and preferably one additional language such as Python, Go, or NodeJS
* Proficiency with container technologies like Docker or containerd
* Strong knowledge of architecting and managing Kubernetes clusters, and deploying a large variety of applications to them
* Experience deploying, scaling, and troubleshooting production services
* Experience with service meshes like Istio/Envoy and Linkerd, and API gateways like Kong and Nginx
* Experience writing infrastructure as code (IaC) with tools like Terraform or Pulumi
* Experience with logging, tracing, metrics, and monitoring dashboard tools such as Grafana, Loki, and Prometheus
* Competence in working with public cloud infrastructure such as GCP, Azure, and/or AWS

We'd Also Like to See

* Familiarity with running blockchain software (bitcoin, ethereum, etc.)
* Experience hardening and securing Kubernetes clusters with monitoring and auditing dashboards
* Experience with multiple technology stacks for configuration, monitoring, logging, alerting, CI/CD, and application runtimes (Python, Node, Rust)
* Experience with caching and database solutions (e.g. Redis, Postgres)
* Experience with various network protocols, such as HTTP, TLS, DNS, and TCP/IP (transport and network layers)
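As a small illustration of the monitoring and automation work described above, here is a hypothetical Python script that queries a Prometheus server's HTTP API for the built-in up metric and reports scrape targets that are down. The server URL is a placeholder, and in production this kind of check would normally live in Alertmanager rules rather than an ad-hoc script.

```python
# Hypothetical sketch: poll Prometheus' HTTP API (GET /api/v1/query) for the built-in
# "up" metric and report any scrape targets that are currently down. URL is a placeholder.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"

def find_down_targets() -> list[str]:
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},  # instant vector of targets failing their scrape
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # Each result carries the metric labels, including job and instance.
    return [f'{r["metric"].get("job", "?")}/{r["metric"].get("instance", "?")}' for r in results]

if __name__ == "__main__":
    down = find_down_targets()
    print("All targets up" if not down else f"Down targets: {', '.join(down)}")
```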
We'd love to hear from you even if you don't have experience or interest in every bullet. There's no perfect candidate, and we want to find the right fit, even if it's different than we imagine.

What We'll Offer

* Salary range (regardless of location, benchmarked annually): $175,000 - $195,000
* Company equity and Stacks (STX) tokens; STX is the native cryptocurrency of the Stacks network
* $1,200/yr learning and development stipend
* $1,000/yr of charity donation matching to an organization of your choosing
* $500/mo co-working space reimbursement
* Daily lunch reimbursement (even if you're remote!)
* Open vacation policy: take the days you need
* Family-friendly health benefits
* Free life and disability insurance
* Health and dependent care FSA
* Up to 16 weeks of paid parental leave
* 401k with 3% match

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Web3, Docker, DevOps, Cloud, API, and Engineer roles:

$72,500 — $117,500/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
New York City, New York, United States
We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

* Architect and develop data pipelines to optimize performance, quality, and scalability
* Build, maintain, and operate the scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
* Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake (a minimal Airflow sketch follows the qualifications below)
* Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
* Orchestrate sophisticated data flow patterns across a variety of disparate tooling
* Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
* Partner with the rest of the Data Platform team to set best practices and ensure they are followed
* Partner with the analytics engineers to ensure the performance and reliability of our data sources
* Partner with machine learning engineers to deploy predictive models
* Partner with the legal and security teams to build frameworks and implement data compliance and security policies
* Partner with DevOps to build IaC and CI/CD pipelines
* Support code versioning and code deployments for data pipelines

You Have:

* 8+ years of professional experience designing, creating, and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
* Demonstrated experience writing clean, efficient, and well-documented Python code, and a willingness to become effective in other languages as needed
* Demonstrated experience writing complex, highly optimized SQL queries across large data sets
* Experience with cloud technologies such as AWS and/or Google Cloud Platform
* Experience with the Databricks platform
* Experience with IaC technologies like Terraform
* Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
* Experience building event streaming pipelines using Kafka/Confluent Kafka
* Experience with the modern data stack: Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
* Experience with containers and container orchestration tools such as Docker or Kubernetes
* Experience with machine learning and MLOps
* Experience with CI/CD (Jenkins, GitHub Actions, CircleCI)
* Thorough understanding of the SDLC and Agile frameworks
* Project management skills and a demonstrated ability to work autonomously

Nice to Have:

* Experience building data models using dbt
* Experience with JavaScript and event tracking tools like GTM
* Experience designing and developing systems with desired SLAs and data quality metrics
* Experience with microservice architecture
* Experience architecting an enterprise-grade data platform
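To illustrate the orchestration work listed above, here is a minimal, hypothetical Airflow DAG that pulls records from a REST endpoint and hands them to a transform step. The DAG id, schedule, endpoint, and task logic are invented for illustration and are not taken from the posting.

```python
# Hypothetical sketch: a tiny daily ingestion DAG using Apache Airflow's classic
# PythonOperator. The endpoint, DAG id, and schedule are placeholders.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    """Pull raw records from an (invented) REST endpoint; the return value becomes an XCom."""
    resp = requests.get("https://api.example.com/v1/orders", timeout=30)
    resp.raise_for_status()
    return resp.json()

def transform(ti, **context):
    """Read the extract task's XCom and do a trivial transformation."""
    records = ti.xcom_pull(task_ids="extract")
    print(f"transforming {len(records)} records")

with DAG(
    dag_id="orders_ingest_demo",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```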
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Python, Docker, Testing, DevOps, JavaScript, Cloud, API, Senior, Legal, and Engineer roles:

$60,000 — $110,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
San Francisco, California, United States
Our mission at Orbital Insight is to understand what we're doing on and to the Earth. We do this through our cloud-based SaaS platform, Orbital Insight Terrascope, and our on-premise offering, Terrabox, by ingesting geospatial data at massive scales and then applying state-of-the-art AI/ML and data science algorithms at scale.

In order to achieve this, the cloud infrastructure engineering team is motivated to provide a robust infrastructure layer between the platform and the cloud providers. This software layer is centered around Kubernetes to provide cloud-agnostic feasibility and requires deployability on customers' cloud environments.

Our work spans multiple areas including cloud-agnostic infrastructure, networking, orchestration, distributed storage, and online/streaming processing. If you enjoy applying computer science fundamentals to real-world challenges and building scalable systems, you will fit right in.

In order to take our government business to the next level, we are looking to hire a DevOps Engineer to join our Public Sector team. This DevOps Engineer will work on developing, adapting, and extending our commercial software products to fit the needs of our government customers.

At Orbital Insight, we work in cross-functional Agile teams, exploring how geospatial data, data science, AI/deep learning, computer vision, and intimacy with user needs can create entirely new products that give novel insights about what we are doing on and to the earth. Our pioneering products help people answer questions that cannot be answered today.

We value experienced engineers who already have a breadth of experience in multiple areas -- databases, devops, machine learning, API design, and more -- and are eager to learn new areas and new technologies. If all this sounds interesting to you, we'd love to meet you.

The position is remote, with travel as needed to customer sites in the Washington DC area and occasional travel to corporate headquarters in Palo Alto, California.

This position requires an active TS/SCI (DoD) clearance.

Responsibilities

* Lead the deployment and maintenance of our flagship product, Terrascope, from GovCloud to JWICS; you are the primary engineer for these initiatives
* Lead a cross-functional team for a government engagement, including facilitating design and implementation as well as project management, to meet contractual deliverables
* Design and develop the software layer between the platform and the cloud providers
* Understand the requirements for cloud-provider-agnostic and air-gapped environments
* Be responsible for the runtime infrastructure under our production system and all developer resources
* Automate packaging and testing for releases and bootstrapping
* Attend technology conferences that will help support learning, and bring ideas from the greater community that can further improve our solutions

Mandatory Qualifications

* 5 years minimum experience as a DevOps Engineer
* 3 years of experience with Kubernetes to deploy, scale, and manage containers with configuration using Helm charts (a minimal sketch follows the qualifications below)
* Comfortable at the command line and working within a Linux operating system (preferably RHEL)
* Experience working with Docker or other container technologies
* Experience working with cloud providers such as AWS, GCP, or Azure; configuring networking (DNS, routing, load balancing) is a good example
* Command of a scripting language such as Python or Bash, as well as Git
* Proficiency and experience with infrastructure as code (IaC) tools such as Terraform

Preferred Qualifications

* Experience with JWICS integration (PKI, NPE certs)
* Experience with air-gapped / on-premise deployment
* Experience with databases and message queues
* Experience with cloud/Kubernetes security
* Computer science or electrical engineering degree, or related experience
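As a small illustration of the Helm-driven deployment work in the qualifications above, here is a hypothetical Python wrapper that shells out to the Helm CLI to install or upgrade a chart into a target namespace. The release name, chart path, and values file are placeholders; in an air-gapped environment the chart would typically come from a local registry or an exported archive.

```python
# Hypothetical sketch: drive a `helm upgrade --install` from Python via subprocess.
# Release name, chart path, namespace, and values file are placeholders.
import subprocess

def helm_deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    """Install the chart if absent, upgrade it otherwise, and wait for rollout."""
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace,
        "--create-namespace",
        "-f", values_file,
        "--atomic",   # roll back automatically if the release fails
        "--wait",     # block until resources report ready
        "--timeout", "10m",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    helm_deploy(
        release="demo-release",           # illustrative only
        chart="./charts/demo-app",        # local chart path, e.g. for air-gapped installs
        namespace="demo",
        values_file="values/govcloud.yaml",
    )
```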
$140,000 - $210,000 a year. The salary range includes annual base salary only.

At Orbital Insight, we believe that a diverse workforce that reflects the diversity of our planet is the way to achieve our mission: to understand what is happening on and to the Earth. Orbital Insight is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. We do not accept unsolicited headhunter and agency resumes and will not pay any third-party agency or company that does not have a signed agreement with Orbital Insight.

#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, SaaS, Python, Docker, Testing, DevOps, Cloud, API, Engineer, and Linux roles:

$60,000 — $110,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
Arlington, VA