The Digital Modernization Sector has an exciting career opportunity for a Kubernetes Engineer in Colorado Springs, CO, to support the US Space Force's Space Systems Command (SSC), Operational Command and Control Acquisition Delta, known as Kobayashi Maru. This role is instrumental in the development and deployment of mission-critical software for space defense, space domain awareness, and enabling data services.\n\nPrimary Responsibilities:\n* Design, implement, and maintain highly available Kubernetes clusters across cloud and on-prem environments.\n* Automate infrastructure provisioning, monitoring, and scaling using Infrastructure as Code (IaC) and CI/CD pipelines.\n* Develop and manage Helm charts for application deployment and configuration management.\n* Deploy and manage applications on cloud platforms such as Azure, AWS, Google Cloud, and Oracle Cloud Infrastructure (OCI).\n* Monitor and troubleshoot Kubernetes workloads, networking, and persistent storage solutions.\n* Implement Kubernetes security best practices, including RBAC, network policies, and container runtime security.\n* Optimize performance and reliability of containerized applications in distributed systems.\n* Collaborate with development, security, and operations teams to enhance DevOps workflows and cloud-native application delivery.\n* Integrate Kubernetes with service meshes, logging, and observability tools such as Istio, Prometheus, Grafana, and the ELK Stack.\n* Participate in system upgrades, disaster recovery planning, and compliance initiatives such as NIST, CIS Benchmarks, and FedRAMP.\n* Mentor junior engineers and contribute to knowledge sharing within the organization.\n\nBasic Qualifications:\n* Requires a BS and 8+ years of prior relevant experience, or a Master's with 6+ years of prior relevant experience; additional years of experience will be accepted in lieu of a degree.\n* Minimum 5+ years of experience working with Kubernetes in production environments. 
* Must have a DoD-8570 IAT Level 2 baseline certification (Security+ CE or equivalent) to start and maintain.\n* Must have Certified Kubernetes Application Developer (CKAD) and Azure Certified DevOps Engineer - Professional, or equivalent cloud certifications.\n* Strong expertise in Kubernetes administration, troubleshooting, and performance tuning.\n* Hands-on experience with cloud platforms (AWS, Azure, Google Cloud) and their Kubernetes services (EKS, AKS, GKE).\n* Proficiency in containerization technologies like Docker and container runtime management.\n* Solid understanding of Infrastructure as Code (Terraform, Ansible, CloudFormation).\n* Experience with CI/CD pipelines using tools like GitLab CI/CD, Jenkins, ArgoCD, or Tekton.\n* Deep knowledge of Kubernetes networking (Calico, Cilium, Istio, or Linkerd) and storage solutions (Ceph, Portworx, Longhorn).\n* Expertise in monitoring and logging with Prometheus, Grafana, ELK, or OpenTelemetry.\n* Strong scripting skills in Bash, Python, or Golang for automation.\n* Familiarity with Kubernetes security best practices, including Pod Security Standards, RBAC, and image scanning tools (Trivy, Aqua, or Falco).\n* Experience with GitOps methodologies (ArgoCD, FluxCD).\n* Knowledge of serverless computing and Kubernetes-based event-driven architectures.\n* Familiarity with service meshes and API gateways (Istio, Envoy, Traefik).\n* Hands-on experience with AWS, Azure, or Google Cloud Platform security tools and configurations.\n* Proficiency in cloud security frameworks such as CSA CCM (Cloud Controls Matrix), FedRAMP, or similar.\n* Experience embedding security in CI/CD pipelines using tools like Jenkins, GitLab, or GitHub Actions.\n* Experience with automation tools (e.g., Terraform, Ansible, or CloudFormation) and scripting languages (e.g., Python, PowerShell, or Bash).\n* Extensive experience with containerization and orchestration platforms like Kubernetes. 
* Strong analytical and problem-solving skills, with the ability to communicate complex technical concepts to non-technical stakeholders.\n* Knowledge of hybrid cloud networking (e.g., VPNs, ExpressRoute, Direct Connect).\n* Experience with DevSecOps pipelines and integration.\n* Experience working in agile development and DevOps-driven environments.\n* US citizenship and possession of a current active DoD TS/SCI clearance.\n\nPreferred Qualifications:\n* Master's degree in computer science.\n* Multi-Cluster & Hybrid Deployments: experience managing federated or multi-cluster Kubernetes environments across hybrid and multi-cloud architectures.\n* Custom Kubernetes Operators: developing and maintaining Kubernetes Operators using the Operator SDK (Go, Python, or Ansible).\n* Cluster API (CAPI) Expertise: experience with Cluster API for managing the Kubernetes lifecycle across cloud providers.\n* Advanced Scheduling & Tuning: custom scheduling, affinity/anti-affinity rules, and performance optimization for workloads.\n* Kubernetes Hardening: deep knowledge of CIS Benchmarks, PodSecurityPolicies (PSP), and Kyverno or Open Policy Agent (OPA).\n\nOriginal Posting: March 31, 2025\n\nFor U.S. Positions: While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days, with an anticipated close date of no earlier than 3 days after the original posting date listed above.\n\nPay Range: $104,650.00 - $189,175.00\n\nThe Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.\n\nLeidos\nLeidos is a Fortune 500® innovation company rapidly addressing the world's most vexing challenges in national security and health. 
The company's global workforce of 47,000 collaborates to create smarter technology solutions for customers in heavily regulated industries. Headquartered in Reston, Virginia, Leidos reported annual revenue of approximately $15.4 billion for the fiscal year ended December 29, 2023. For more information visit www.Leidos.com. Pay and Benefits Pay and benefits are fundamental to any career decision. That's why we craft compensation packages that reflect the importance of the work we do for our customers. Employment benefits include competitive compensation, Health and Wellness programs, Income Protection, Paid Leave and Retirement. More details are available here. Securing Your Data Leidos will never ask you to provide payment-related information at any part of the employment application process. And Leidos will communicate with you only through emails that are sent from a Leidos.com email address. If you receive an email purporting to be from Leidos that asks for payment-related information or any other personal information, please report the email to [email protected]. Commitment and Diversity All qualified applicants will receive consideration for employment without regard to sex, race, ethnicity, age, national origin, citizenship, religion, physical or mental disability, medical condition, genetic information, pregnancy, family structure, marital status, ancestry, domestic partner status, sexual orientation, gender identity or expression, veteran or military status, or any other basis prohibited by law. Leidos will also consider for employment qualified applicants with criminal histories consistent with relevant laws. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Docker, DevOps, Cloud, API, Junior, Golang and Engineer jobs that are similar:\n\n
$60,000 — $80,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\n6314 Remote/Teleworker US
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
\nWe are tech transformation specialists; we are CI&T.\n\n\nWe combine the disruptive power of Artificial Intelligence with human expertise to support large companies in navigating changes in technology and business. We have 30 years of experience, 6,000 workers, offices in 10 countries, and talent across 5 continents. We operate in the fields of design, strategy, and engineering for global brands, helping clients achieve the full potential of technology as a force for good. Impact is what we deliver.\n\n\nHi there,\nThis is Juliana from CI&T! I am a Talent Attraction Analyst looking for people located in Brazil for a Junior SRE position to work on an international project.\n\n\n\n\nResponsibilities:\n- Collaborate with CI&T to support an international client, acting as an SRE (Site Reliability Engineer).\n- Your main focus will be to implement processes and tools related to SRE practices.\n- Create solutions in partnership with the client that ensure the reliability, availability, performance, and resilience of web platforms.\n- Process and analyze logs and metrics to develop preventive, standardized monitoring solutions that fit the client's environment.\n- Share knowledge and assist in the development of team members working in the SRE domain.\n\n\nRequirements for this challenge:\n- Good communication skills, always working in partnership with the client.\n- Experience in programming.\n- Knowledge of infrastructure.\n- Experience with CI/CD tools.\n- Practices related to DevOps culture and the SRE discipline.\n- Cloud experience (Azure preferred).\n- Familiarity with REST APIs and Docker.\n- Monitoring and analysis of applications and infrastructure.\n- Processing and analysis of distributed logs.\n\n\n#LI-JP3\n#EntryLevel\n\n\nOur benefits:\n- Health plan and dental plan;\n- Meal allowances;\n- Childcare assistance;\n- Extended parenting leave;\n- Gympass/Wellhub;\n- Annual profit-sharing distribution;\n- Life insurance;\n- Partnership with 
an online mental health platform;\n- CI&T University;\n- Discount Club;\n- Support Program: psychological guidance, nutritionist, and more;\n- Pregnancy course and responsible parenthood;\n- Partnership with online course platforms;\n- Platform for language learning;\n- And many others.\nMore details: https://ciandt.com/us/en-us/careers\n\n\nCI&T is an equal-opportunity employer. We celebrate and appreciate the diversity of our CI&Ters' identities and lived experiences. We are committed to building, promoting, and retaining a diverse, inclusive, and equitable company and culture focused on creating a better tomorrow.\n\n\nAt CI&T, we recognize that innovation and transformation only happen in diverse, inclusive, and safe work environments. Our teams are most impactful when people from all backgrounds and experiences collaborate to share, create, and hear ideas. \nBefore applying for our opportunities, take a look at the Conflict of Interest Policy on our website.\n\n\nWe strongly encourage candidates from diverse and underrepresented communities to apply for our vacancies. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to DevOps, Cloud and Non Tech jobs that are similar:\n\n
$50,000 — $95,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\nAirDNA began with a big dream in a balmy California garage in 2015. Since then, the technology startup has grown into the leading provider of data and business intelligence for the billion-dollar travel and vacation rental industry, with offices in Denver and Barcelona. \n\n\nOur self-serve platform eliminates guesswork and equips Airbnb hosts with the smart and competitive insights needed to succeed in the ever-evolving short-term rental landscape. \n\n\nWe also arm enterprise clients with customized reports and in-depth dashboards to ensure they can scale and invest strategically. These customers include hundreds of top financial institutions, real estate companies, vacation rental managers, and destination marketing organizations around the world. \n\n\nWe track the daily performance of over 10 million Airbnb and Vrbo properties across 120,000 global markets. We also collect data from over a million partner properties. This marriage of scraped and source data, enhanced by our proprietary algorithms, makes our solutions the most accurate and comprehensive in the world. \n\n\nWe're firm believers that data isn't the destination; it's the starting point. The launchpad. The bedrock for any future-forward business.\n\n\nThe Role: \n Come join our Platform team and help drive our growth by designing, maintaining, and improving our platform and processes. 
The ideal person for this role is driven to design robust, scalable, secure infrastructure, cares about the details, and enjoys helping both individual engineers and their teams work more effectively.\n\n\n\nHere's what you'll get to do: \n* Build and maintain monitoring, logging, and telemetry solutions for proactive performance and reliability management (experience with Datadog, Prometheus, Grafana).\n* Evaluate and integrate new technologies to enhance platform capabilities, especially around containers, databases, and cloud-native architectures.\n* Ensure security, compliance, and cost optimization in all platform solutions, utilizing tools like IAM, GuardDuty, and AWS Security Hub.\n* Design, implement, and manage scalable infrastructure solutions using AWS services (EC2, S3, RDS, Lambda, CloudFront, etc.).\n* Manage, scale, and optimize multiple databases (PostgreSQL + Druid) to ensure performance, availability, and redundancy.\n* Collaborate with development and operations teams to streamline release processes and integrate best practices for infrastructure as code (Terraform, CloudFormation).\n* Work closely with stakeholders to identify infrastructure needs and lead initiatives to scale the platform in alignment with business goals.\n* Drive continuous improvement in the platform's architecture and processes, optimizing for performance, reliability, and operational efficiency.\n* Collaborate with cross-functional teams to align platform development with product goals and strategies.\n* Mentor and guide junior team members, providing technical leadership and driving best practices across the platform team.\n\n\n\nHere's what you'll need to be successful: \n* Strong familiarity with Amazon Web Services; multi-account experience preferred.\n* Expertise using Docker and Kubernetes.\n* Experience with developing and maintaining CI/CD pipelines to automate application deployment and infrastructure provisioning.\n* Able to diagnose and troubleshoot problems in a 
distributed microservice environment.\n* Solid understanding of TCP/IP networking.\n* Expertise with Linux (prefer Ubuntu, Alpine, and/or Amazon Linux).\n* Understanding of DevOps practices.\n* Demonstrated experience in managing or leading platform teams, with the ability to grow the team and develop talent within.\n\n\n\nHere's what would be nice to have:\n* GitLab pipelines\n* ArgoCD\n* Linkerd, Istio, or other service mesh\n* ELK stack or similar logging platforms\n* Ansible or other configuration management tools\n* CloudFormation or other IaC tools\n* JSON/YAML\n* OpenVPN\n* Apache Airflow\n* Databases (PostgreSQL and Druid preferred)\n* Cloudflare\n* Atlassian tools such as Jira, Confluence, StatusPage\n* Programming experience: shell scripting, Python, Golang preferred\n* Experience with performance optimization of distributed microservices\n\n\n\nHere's what you can expect from us: \n* Competitive cash compensation and benefits; the salary range for this position is $150,000 - $180,000 per year. \n\nColorado Salary Statement: The salary range displayed is specifically for potential hires who will work or reside in the state of Colorado if selected for this role. Any offered salary is determined based on internal equity, internal salary ranges, market data/ranges, the applicant's skills and prior relevant experience, and certain degrees and certifications. 
\nBenefits include:\n* Medical, dental, and vision packages to meet your needs\n* Unlimited vacation policy; take time when you need it \n* Eligibility for the Company's annual discretionary bonus program\n* 401K with employer match up to 4%\n* Continuing education stipend\n* 16 weeks of paid parental leave\n* New MacBooks for employees\n\n\nOffice Perks for Denver-Based Employees:\n* Commuter/RTD benefit\n* Quarterly team outings\n* In-office lunch Tuesday - Thursday\n* We have a great office located just a few blocks away from Union Station in the heart of Denver's historic LoDo neighborhood: high ceilings, exposed brick, a fully stocked kitchen (snacks, espresso, etc.), and plenty of meeting rooms and brainstorming nooks\n* Pet-friendly!\n\n\nAirDNA seeks to attract the best-qualified candidates who support the mission, vision, and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions and privileges of employment including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance and retirement. 
\n\n\nWe are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace or the company as a whole. To request accommodation, please email [email protected]. Please allow for 24 hours to process your request. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Design, Amazon, Docker, DevOps, Education, Senior, Marketing, Golang and Engineer jobs that are similar:\n\n
$50,000 — $100,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nRemote
\nAbout Coalfire\nCoalfire is on a mission to make the world a safer place by solving our clients' toughest cybersecurity challenges. We work at the cutting edge of technology to advise, assess, automate, and ultimately help companies navigate the ever-changing cybersecurity landscape. We are headquartered in Denver, Colorado, with offices across the U.S. and U.K., and we support clients around the world. \nBut that's not who we are; that's just what we do. \n \nWe are thought leaders, consultants, and cybersecurity experts, but above all else, we are a team of passionate problem-solvers who are hungry to learn, grow, and make a difference. \n \nAnd we're growing fast. \n \nWe're looking for a Site Reliability Engineer I to support our Managed Services team. \n\n\nPosition Summary\nAs a Junior Site Reliability Engineer at Coalfire within our Managed Services (CMS) group, you will be a self-starter, passionate about cloud technology, and thrive on problem solving. You will work within major public clouds, utilizing automation and your technical abilities to operate the most cutting-edge offerings from Cloud Service Providers (CSPs). 
This role directly supports leading cloud software companies to provide seamless reliability and scalability of their SaaS product to the largest enterprises and government agencies around the world.\n \nThis can be a remote position (must be located in the United States).\n\n\n\nWhat You'll Do\n* Become a member of a highly collaborative engineering team offering a unique blend of Cloud Infrastructure Administration, Site Reliability Engineering, Security Operations, and Vulnerability Management across multiple clients.\n* Coordinate with client product teams, engineering team members, and other stakeholders to monitor and maintain a secure and resilient cloud-hosted infrastructure to established SLAs in both production and non-production environments.\n* Innovate and implement using automated orchestration and configuration management techniques. Understand the design, deployment, and management of secure and compliant enterprise servers, network infrastructure, boundary protection, and cloud architectures using Infrastructure-as-Code.\n* Create, maintain, and peer review automated orchestration and configuration management codebases, as well as Infrastructure-as-Code codebases. Maintain IaC tooling and versioning within Client environments.\n* Implement and upgrade client environments with CI/CD infrastructure code and provide internal feedback to development teams for environment requirements and necessary alterations. 
\n* Work across AWS, Azure, and GCP, understanding and utilizing their unique native services in client environments.\n* Configure, tune, and troubleshoot cloud-based tools, and manage cost, security, and compliance for the Client's environments.\n* Monitor and resolve site stability and performance issues related to functionality and availability.\n* Work closely with client DevOps and product teams to provide 24x7x365 support to environments through Client ticketing systems.\n* Support definition, testing, and validation of incident response and disaster recovery documentation and exercises.\n* Participate in on-call rotations as needed to support Client critical events and operational needs that may lie outside of business hours.\n* Support testing and data reviews to collect and report on the effectiveness of current security and operational measures, in addition to remediating deviations from current security and operational measures.\n* Maintain detailed diagrams representative of the Client's cloud architecture.\n* Maintain, optimize, and peer review standard operating procedures, operational runbooks, technical documents, and troubleshooting guidelines\n\n\n\nWhat You'll Bring\n* BS or above in a related Information Technology field, or an equivalent combination of education and experience\n* 2+ years experience in 24x7x365 production operations\n* Fundamental understanding of networking and networking troubleshooting.\n* 2+ years experience installing, managing, and troubleshooting Linux and/or Windows Server operating systems in a production environment.\n* 2+ years experience supporting cloud operations and automation in AWS, Azure, or GCP (and aligned certifications)\n* 2+ years experience with Infrastructure-as-Code and orchestration/automation tools such as Terraform and Ansible\n* Experience with IaaS platform capabilities and services (cloud certifications expected)\n* Experience with ticketing tool solutions such as Jira and ServiceNow\n* Experience using 
environmental analytics tools such as Splunk and Elastic Stack for querying, monitoring, and alerting\n* Experience in at least one primary scripting language (Bash, Python, PowerShell)\n* Excellent communication, organizational, and problem-solving skills in a dynamic environment\n* Effective documentation skills, to include technical diagrams and written descriptions\n* Ability to work as part of a team with a professional attitude and demeanor\n\n\n\nBonus Points\n* Previous experience in a consulting role within dynamic, fast-paced environments\n* Previous experience supporting a 24x7x365 highly available environment for a SaaS vendor\n* Experience supporting security and/or infrastructure incident handling and investigation, and/or system scenario re-creation\n* Experience working within container orchestration solutions such as Kubernetes, Docker, EKS, and/or ECS\n* Experience working within an automated CI/CD pipeline for release development, testing, remediation, and deployment\n* Cloud-based networking experience (Palo Alto, Cisco ASAv, etc.)\n* Familiarity with frameworks such as FedRAMP, FISMA, SOC, ISO, HIPAA, HITRUST, PCI, etc.\n* Familiarity with configuration baseline standards such as CIS Benchmarks & DISA STIG\n* Knowledge of encryption technologies (SSL, encryption, PKI)\n* Experience with diagramming (Visio, Lucid Chart, etc.) \n* Application development experience for cloud-based systems\n\n\n\n\n\nWhy You'll Want to Join Us\n\n\nAt Coalfire, you'll find the support you need to thrive personally and professionally. In many cases, we provide a flexible work model that empowers you to choose when and where you'll work most effectively, whether you're at home or an office. \nRegardless of location, you'll experience a company that prioritizes connection and wellbeing and be part of a team where people care about each other and our communities. 
You'll have opportunities to join employee resource groups, participate in in-person and virtual events, and more. And you'll enjoy competitive perks and benefits to support you and your family, like paid parental leave, flexible time off, certification and training reimbursement, digital mental health and wellbeing support membership, and comprehensive insurance options. \n\n\nAt Coalfire, equal opportunity and pay equity are integral to the way we do business. A reasonable estimate of the compensation range for this role is $95,000 to $110,000, based on national salary averages. The actual salary offer to the successful candidate will be based on job-related education, geographic location, training, licensure and certifications, and other factors. You may also be eligible to participate in annual incentive, commission, and/or recognition programs. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. \n \n#LI-REMOTE \n#LI-JB1 \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to SaaS, DevOps, Cloud, Junior and Engineer jobs that are similar:\n\n
$60,000 — $100,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nUnited States
We are currently seeking a Senior Data Engineer with 5–7 years' experience. The ideal candidate would have the ability to work independently within an Agile working environment and have experience working with cloud infrastructure leveraging tools such as Apache Airflow, Databricks, dbt and Snowflake. Familiarity with real-time data processing and AI implementation is advantageous.

Responsibilities:
* Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch and real-time use cases.
* Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks.
* Leverage Databricks for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.
* Utilize PySpark and/or Ray to build and scale distributed computing tasks, enhancing the performance of machine learning model training and inference processes.
* Monitor, troubleshoot, and resolve issues within data pipelines and infrastructure, implementing best practices for data engineering and continuous improvement.
* Diagrammatically document data engineering workflows.
* Collaborate with other Data Engineers, Product Owners, Software Developers and Machine Learning Engineers to implement new product features by understanding their needs and delivering in a timely manner.

Qualifications:
* Minimum of 3 years' experience deploying enterprise-level scalable data engineering solutions.
* Strong examples of independently developed end-to-end data pipelines, from problem formulation and raw data through implementation, optimization, and results.
* Proven track record of building and managing scalable cloud-based infrastructure on AWS (incl. S3, DynamoDB, EMR).
* Proven track record of implementing and managing the AI model lifecycle in a production environment.
* Experience using Apache Airflow (or equivalent), Snowflake, and Lucene-based search engines.
* Experience with Databricks (Delta format, Unity Catalog).
* Advanced SQL and Python knowledge with associated coding experience.
* Strong experience with DevOps practices for continuous integration and continuous delivery (CI/CD).
* Experience wrangling structured and unstructured file formats (Parquet, CSV, JSON).
* Understanding and implementation of best practices within ETL and ELT processes.
* Data quality best-practice implementation using Great Expectations.
* Real-time data processing experience using Apache Kafka (or equivalent) is advantageous.
* Works independently with minimal supervision.
* Takes initiative and is action-focused.
* Mentors and shares knowledge with junior team members.
* Collaborative, with a strong ability to work in cross-functional teams.
* Excellent communication skills with the ability to communicate with stakeholders across varying interest groups.
* Fluency in spoken and written English.

#LI-RT9

Edelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics and data consultancy with a distinctly human mission.

We use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting and connections more meaningful.

DXI brings together and integrates the necessary people-based PR, communications, social, research and exogenous data, as well as the technology infrastructure to create, collect, store and manage first-party data and identity resolution.
DXI comprises over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.

To learn more, visit: https://www.edelmandxi.com

#Salary and compensation
No salary data published by company, so we estimated salary based on similar Python, DevOps, Cloud, Senior, Junior and Engineer jobs:

$60,000 — $110,000/year