Who we are:

Raft (https://TeamRaft.com) is a customer-obsessed, non-traditional small business with a purposeful focus on Distributed Data Systems, Platforms at Scale, and Complex Application Development, headquartered in McLean, VA. Our clients include innovative federal and public agencies leveraging design thinking, a cutting-edge tech stack, and a cloud-native ecosystem. We build digital solutions that impact the lives of millions of Americans.

We're looking for an experienced DevSecOps Engineer to support our customers and join our passionate team of high-impact problem solvers.

About the role:

As a DevSecOps Engineer, you are responsible for developing and implementing end-to-end cluster and application lifecycles, ensuring processes are both secure and efficient. You will collaborate with clients to design and implement Kubernetes or Docker solutions, help build CI/CD pipelines to ensure a smooth software update process, and actively apply GitOps principles to software delivery.

What we are looking for:

3+ years of Python software development experience
3+ years of Bash scripting experience
3+ years of Linux system administration experience
3+ years of hands-on experience with Kubernetes or Docker, provisioning production clusters and maintaining their compliance
3+ years of automated DevOps (e.g., building pipelines) or cloud infrastructure (e.g., AWS, Azure) experience
3+ years of automating builds for Docker or OCI containers
Skilled in building GitLab CI/CD pipeline templates and jobs
Ability to implement and improve development and security best practices by building the necessary CI/CD pipeline jobs (linting, SCA, SAST, vulnerability scanning)
Familiarity with Cosign for signing and verifying container images and attestations
Experienced in developing, troubleshooting, and maintaining build automation for applications and images, and in developing end-to-end application testing
Proven background in software systems development via CI/CD pipelines (GitLab Pipelines)
Exposure to Agile, DevOps, and DevSecOps methodologies, practices, and culture
Strong knowledge of version control systems like GitLab, with the ability to maintain/operate GitLab in the cloud or on-premises
Problem-solving aptitude
Expertise in designing and implementing enterprise-grade, scalable cloud-based services to support our development teams
Familiarity with GitOps tools, e.g., FluxCD, ArgoCD
Knowledge of log management and analytics tools such as PLG, Splunk, or ELK
Background in build, deployment, release automation, or orchestration
Proficiency in writing unit/integration/e2e tests (e.g., JUnit, Cypress, Selenium)
Skilled in Infrastructure as Code (e.g., Terraform, Ansible)
Ability to obtain Security+ within the first 90 days of employment with Raft

Highly preferred:

2+ years of experience with Spring Boot (Java development), in particular the Spring Cloud Gateway library
Proficiency in building CLI applications in Python
Skilled in working with file-scanning applications (e.g., virus-scanning tools)
Experience using FastAPI to develop web applications in Python
Expertise in implementing Sigstore and Cosign to sign container images as well as SBOMs
Skilled in hardening application containers
Proven background with the Istio service mesh
Background in defensive or offensive cyber capability development
Passionate about automation, system efficiency, and security

Clearance Requirements:

Active Top Secret security clearance with SCI eligibility

Work Type:

Remote (local to San Antonio, TX)
May require up to 10% travel

Salary Range:

$90,000 - $170,000
Compensation is determined by a candidate's comprehensive experience, demonstrated skill, and proven abilities

What we will offer you:

Highly competitive salary
Fully covered healthcare, dental, and vision coverage
401(k) and company match
Take-as-you-need PTO + 11 paid holidays
Education & training benefits
Annual budget for your tech/gadget needs
Monthly box of yummy snacks to eat while doing meaningful work
Remote, hybrid, and flexible work options
Team off-sites in fun places!
Generous referral bonuses
And more!

Our Vision Statement:

We bridge the gap between humans and data through radical transparency and our obsession with the mission.

Our Customer Obsession:

We approach every deliverable like it's a product and adopt a customer-obsessed mentality. As we grow and our footprint becomes larger, teams and employees will treat each other not only as teammates but as customers. We must live the customer-obsessed mindset, always. This will help us scale, and it will translate to the interactions that our Rafters have with their clients and the other product teams they integrate with. Our culture will enable our success and set us apart from other companies.

How do we get there?

Public-sector modernization is critical for us to live in a better world. We at Raft want to innovate and solve complex problems. If we are successful, our generation and the ones that follow will live in a delightful, efficient, and accessible world where out-of-the-box thinking and collaboration are the norm.

Raft's core philosophy is Ubuntu: I Am, Because We Are. We support our "nadi" by elevating other Rafters. We work as a hyper-collaborative team where each member brings a unique perspective, adding value that did not exist before. People make Raft special. We celebrate each other and our cognitive and cultural diversity. We are devoted to our practice of innovation and collaboration.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Python, Docker, DevOps, Cloud, and Engineer jobs:

$60,000 — $100,000/year
#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
San Antonio, Texas, United States
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
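The pipeline-security jobs and Cosign image signing called out in the Raft requirements above might look like the following minimal GitLab CI sketch. Stage names, image tags, and CI variables here are illustrative assumptions, not Raft's actual pipeline:

```yaml
# Illustrative GitLab CI sketch: lint, SAST, container scan, then sign the image.
stages:
  - lint
  - test
  - scan
  - sign

lint:
  stage: lint
  image: python:3.12-slim
  script:
    - pip install ruff
    - ruff check .

sast:
  stage: test
  image: python:3.12-slim
  script:
    - pip install bandit
    - bandit -r src/   # static analysis of Python sources

container_scan:
  stage: scan
  image: aquasec/trivy:latest
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

sign_image:
  stage: sign
  image: bitnami/cosign:latest
  script:
    # COSIGN_PRIVATE_KEY would be a masked CI/CD variable.
    - cosign sign --key env://COSIGN_PRIVATE_KEY "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

On the deployment side, the matching verification step would be `cosign verify --key cosign.pub <image>`.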
About Coalfire

Coalfire is on a mission to make the world a safer place by solving our clients' toughest cybersecurity challenges. We work at the cutting edge of technology to advise, assess, automate, and ultimately help companies navigate the ever-changing cybersecurity landscape. We are headquartered in Denver, Colorado, with offices across the U.S. and U.K., and we support clients around the world.

But that's not who we are; that's just what we do.

We are thought leaders, consultants, and cybersecurity experts, but above all else, we are a team of passionate problem-solvers who are hungry to learn, grow, and make a difference.

And we're growing fast.

We're looking for a Site Reliability Engineer I to support our Managed Services team.

Position Summary

As a Junior Site Reliability Engineer within Coalfire's Managed Services (CMS) group, you will be a self-starter, passionate about cloud technology, who thrives on problem solving. You will work within the major public clouds, using automation and your technical abilities to operate the most cutting-edge offerings from Cloud Service Providers (CSPs). This role directly supports leading cloud software companies, providing seamless reliability and scalability of their SaaS products to the largest enterprises and government agencies around the world.

This can be a remote position (must be located in the United States).

What You'll Do
* Become a member of a highly collaborative engineering team offering a unique blend of cloud infrastructure administration, site reliability engineering, security operations, and vulnerability management across multiple clients.
* Coordinate with client product teams, engineering team members, and other stakeholders to monitor and maintain a secure and resilient cloud-hosted infrastructure to established SLAs in both production and non-production environments.
* Innovate and implement using automated orchestration and configuration management techniques. Understand the design, deployment, and management of secure and compliant enterprise servers, network infrastructure, boundary protection, and cloud architectures using Infrastructure as Code.
* Create, maintain, and peer-review automated orchestration and configuration management codebases, as well as Infrastructure-as-Code codebases. Maintain IaC tooling and versioning within client environments.
* Implement and upgrade client environments with CI/CD infrastructure code and provide internal feedback to development teams on environment requirements and necessary alterations.
* Work across AWS, Azure, and GCP, understanding and utilizing their unique native services in client environments.
* Configure, tune, and troubleshoot cloud-based tools; manage cost, security, and compliance for clients' environments.
* Monitor and resolve site stability and performance issues related to functionality and availability.
* Work closely with client DevOps and product teams to provide 24x7x365 support to environments through client ticketing systems.
* Support definition, testing, and validation of incident response and disaster recovery documentation and exercises.
* Participate in on-call rotations as needed to support client critical events and operational needs that may fall outside business hours.
* Support testing and data reviews to collect and report on the effectiveness of current security and operational measures, in addition to remediating deviations from those measures.
* Maintain detailed diagrams representative of each client's cloud architecture.
* Maintain, optimize, and peer-review standard operating procedures, operational runbooks, technical documents, and troubleshooting guidelines.

What You'll Bring
* BS or above in a related Information Technology field, or an equivalent combination of education and experience
* 2+ years of experience in 24x7x365 production operations
* Fundamental understanding of networking and network troubleshooting
* 2+ years of experience installing, managing, and troubleshooting Linux and/or Windows Server operating systems in a production environment
* 2+ years of experience supporting cloud operations and automation in AWS, Azure, or GCP (and aligned certifications)
* 2+ years of experience with Infrastructure as Code and orchestration/automation tools such as Terraform and Ansible
* Experience with IaaS platform capabilities and services (cloud certifications expected)
* Experience with ticketing tool solutions such as Jira and ServiceNow
* Experience using environmental analytics tools such as Splunk and Elastic Stack for querying, monitoring, and alerting
* Experience in at least one primary scripting language (Bash, Python, PowerShell)
* Excellent communication, organizational, and problem-solving skills in a dynamic environment
* Effective documentation skills, including technical diagrams and written descriptions
* Ability to work as part of a team with a professional attitude and demeanor

Bonus Points
* Previous experience in a consulting role within dynamic, fast-paced environments
* Previous experience supporting a 24x7x365 highly available environment for a SaaS vendor
* Experience supporting security and/or infrastructure incident handling and investigation, and/or system scenario re-creation
* Experience working with container orchestration solutions such as Kubernetes, Docker, EKS, and/or ECS
* Experience working within an automated CI/CD pipeline for release development, testing, remediation, and deployment
* Cloud-based networking experience (Palo Alto, Cisco ASAv, etc.)
* Familiarity with frameworks such as FedRAMP, FISMA, SOC, ISO, HIPAA, HITRUST, PCI, etc.
* Familiarity with configuration baseline standards such as CIS Benchmarks & DISA STIGs
* Knowledge of encryption technologies (SSL/TLS, PKI)
* Experience with diagramming tools (Visio, Lucidchart, etc.)
* Application development experience for cloud-based systems

Why You'll Want to Join Us

At Coalfire, you'll find the support you need to thrive personally and professionally. In many cases, we provide a flexible work model that empowers you to choose when and where you'll work most effectively, whether you're at home or in an office.

Regardless of location, you'll experience a company that prioritizes connection and wellbeing, and you'll be part of a team where people care about each other and our communities. You'll have opportunities to join employee resource groups, participate in in-person and virtual events, and more. And you'll enjoy competitive perks and benefits to support you and your family, like paid parental leave, flexible time off, certification and training reimbursement, a digital mental health and wellbeing support membership, and comprehensive insurance options.

At Coalfire, equal opportunity and pay equity are integral to the way we do business. A reasonable estimate of the compensation range for this role is $95,000 to $110,000, based on national salary averages. The actual salary offer to the successful candidate will be based on job-related education, geographic location, training, licensure, certifications, and other factors. You may also be eligible to participate in annual incentive, commission, and/or recognition programs. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

#LI-REMOTE
#LI-JB1

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar SaaS, DevOps, Cloud, Junior, and Engineer jobs:

$60,000 — $100,000/year
#Location
United States
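As a rough illustration of the Infrastructure-as-Code and configuration management experience the Coalfire posting above asks for (Terraform, Ansible, CIS/STIG baselines), a hardening task in Ansible might look like this sketch. The host group and task selection are invented for illustration:

```yaml
# Illustrative Ansible playbook: baseline hardening for a fleet of Linux hosts.
- name: Apply CIS-style baseline to web servers
  hosts: webservers          # inventory group name is an assumption
  become: true
  tasks:
    - name: Ensure SSH root login is disabled
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: Restart sshd

    - name: Ensure auditd is installed
      ansible.builtin.package:
        name: auditd
        state: present

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Run against an inventory with `ansible-playbook -i inventory.ini harden.yml`; in practice such baselines are usually maintained as reusable roles and checked in CI.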
What You'll Do:

We're looking for a talented and intensely curious Senior AWS Cloud Engineer who is nimble and focused, with a startup mentality. In this newly created role you will be the liaison between data engineers, data scientists, and analytics engineers. You will work to create cutting-edge architecture that provides increased performance, scalability, and concurrency for Data Science and Analytics workflows.

Responsibilities

* Provide AWS infrastructure support and systems administration for new and existing products, implemented through IAM, EC2, S3, AWS networking (VPC, IGW, NGW, ALB, NLB, etc.), Terraform, CloudFormation templates, and security services: Security Groups, GuardDuty, CloudTrail, Config, and WAF.
* Monitor and maintain production, development, and QA cloud infrastructure resources for compliance with all six pillars of the AWS Well-Architected Framework, including the Security pillar.
* Develop and maintain Continuous Integration (CI) and Continuous Deployment (CD) pipelines needed to automate testing and deployment of all production software components as part of a fast-paced, agile engineering team. Technologies required: ElastiCache, Bitbucket Pipelines, GitHub, Docker Compose, Kubernetes, Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS), and Linux-based server instances.
* Develop and maintain Infrastructure as Code (IaC) services for creation of ephemeral cloud-native infrastructure hosted on Amazon Web Services (AWS) and Google Cloud Platform (GCP). Technologies required: AWS CloudFormation, Google Cloud Deployment Manager, AWS SSM, YAML, JSON, Python.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on AWS needed to ensure 99.99% uptime. Technologies required: AWS IAM, AWS CloudWatch, AWS EventBridge, AWS SSM, AWS SQS, AWS SNS, AWS Lambda and Step Functions, Python, Java, RDS Postgres, RDS MySQL, AWS S3, Docker, AWS Elasticsearch, Kibana, AWS Amplify.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on AWS needed to ensure 100% cybersecurity compliance and surveillance. Technologies required: AWS SSM, YAML, JSON, Python, RDS Postgres, Tenable, CrowdStrike EPP, Sophos EPP, Wiz CSPM, Linux Bash scripts.
* Design and code technical solutions that improve the scalability, performance, and reliability of all data acquisition pipelines. Technologies required: Google Ads APIs, YouTube Data APIs, Python, Java, AWS Glue, AWS S3, AWS SNS, AWS SQS, AWS KMS, AWS RDS Postgres, AWS RDS MySQL, AWS Redshift.
* Monitor and remediate server and application security events as reported by CrowdStrike EPP, Tenable, Wiz CSPM, and Invicti.

Who you are:

* Minimum of 5 years of systems administration or DevOps engineering experience on AWS
* Track record of success in systems administration, including system design, configuration, maintenance, and upgrades
* Excels in architecting, designing, developing, and implementing cloud-native AWS platforms and services
* Knowledgeable in managing cloud infrastructure in a production environment to ensure high availability and reliability
* Proficient in automating system deployment, operation, and maintenance using Infrastructure as Code: Ansible, Terraform, CloudFormation, and other common DevOps tools and scripting
* Experience with Agile processes in a structured setting required (Scrum and/or Kanban)
* Security and compliance standards experience such as PCI and SOC, as well as data privacy and protection standards, a big plus
* Experienced in implementing dashboards and data for decision-making related to team and system performance, relying heavily on telemetry and monitoring
* Exceptional analytical capabilities
* Strong communication skills and ability to effectively interact with engineering and business stakeholders

Preferred Qualifications:

* Bachelor's degree in technology, engineering, or a related field
* AWS certifications: Solutions Architect, DevOps Engineer, etc.

Why Spotter:

* Medical and vision insurance covered up to 100%
* Dental insurance
* 401(k) matching
* Stock options
* Complimentary gym access
* Autonomy and upward mobility
* Diverse, equitable, and inclusive culture, where your voice matters

In compliance with local law, we are disclosing the compensation, or a range thereof, for roles that will be performed in Culver City. Actual salaries will vary and may be above or below the range based on various factors including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. A reasonable estimate of the current pay range is $100K to $500K salary per year. The range listed is just one component of Spotter's total compensation package for employees. Other rewards may include an annual discretionary bonus and equity.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Docker, Testing, DevOps, Cloud, Senior, Engineer, and Linux jobs:

$50,000 — $80,000/year
#Location
Los Angeles, California, United States
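For context on the CloudFormation work the Spotter posting above describes, a minimal template for one encrypted, versioned S3 bucket such a role might manage could look like this sketch (resource and bucket names are made up for illustration):

```yaml
# Illustrative CloudFormation template: an encrypted, versioned S3 bucket.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch of an S3 bucket with default encryption and versioning.
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-data-acquisition-bucket   # hypothetical name
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
Outputs:
  BucketArn:
    Value: !GetAtt DataBucket.Arn
```

A template like this would typically be deployed via `aws cloudformation deploy --template-file bucket.yml --stack-name data-bucket`, keeping the bucket's configuration reviewable as code.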
This job post is closed and the position is probably filled. Please do not apply.

Closed by robot after the apply link errored with code 404, 3 years ago.
How you will help

As part of our DevOps team, working alongside our product and development teams, you will own and be responsible for architecting and implementing scalable solutions that offer our customers new insights on their own businesses. You'll use the best tools for the job, whether modern and revolutionary or time-tested and proven, to deliver elegant, scalable solutions that meet business and technical needs. Your team will support you, and you the same. Peer review of solutions and implementations is expected. You will play an integral part in building the foundation of HealthVerity's future.

What you will do
• Design, deploy, and maintain multiple AWS environments
• Automate build, deployment, and testing of internal tools, typically using Python or shell
• Troubleshoot and monitor the performance of applications, databases, data processing servers, and associated storage systems
• Analyze networks and design architecture with a mind toward maintainability, scalability, and security in an AWS environment
• Work with the engineering team to develop robust and scalable platforms and workflows for new analytics processes
• Evaluate and integrate cloud services and tools
• Develop, document, and implement migration plans for continuous improvement of existing products and services
• Automate log collection, storage, and analysis
• Implement and maintain organizational security policies

About you
• You make security a priority in everything you do
• You are data-driven, testing and measuring every step
• You are an expert at monitoring and measuring systems and identifying bottlenecks
• You automate everything
• You leverage concepts such as impermanence to build highly reliable infrastructure
• You are comfortable using distributed version control like Git
• You are comfortable reading Linux shell scripts, Python, Go, Java, JavaScript, Perl, or similar languages

Desired skills and experience
• Expert-level Linux system administration skills
• Experience supporting scalable applications on distributed architectures, storing large data sets, and supporting analytics
• Management and scaling of traditional RDBMS and/or NoSQL
• Deployment of cloud-based infrastructures such as AWS, Rackspace, Joyent, etc., as well as on your own iron
• Hands-on experience with Chef or Puppet, as well as containers such as Docker
• Experience with large-scale data management and processing systems such as Hadoop, Cassandra, Redis, Spark, Riak
• 3+ years of IT and business/industry work experience
• Added bonus: understanding of healthcare IT standards such as HL7 HQMF, HL7 CDA-based standards, EMR, and FHIR

About HealthVerity

Pharmaceutical manufacturers, payers, and government organizations have partnered with HealthVerity to solve some of their most complicated use cases through transformative technologies and real-world data infrastructure. The HealthVerity IPGE platform, based on the foundational elements of Identity, Privacy, Governance, and Exchange, enables the discovery of RWD across the broadest healthcare data ecosystem, the building of more complete and accurate patient journeys, and the ability to power best-in-class analytics and applications with flexibility and ease. Together with our partners, HealthVerity has built the modern way to data for the health insights economy. To learn more about the HealthVerity IPGE platform, visit www.healthverity.com.

Our company challenges
• Empowering clients with highly rewarding data discovery and licensing tools
• Ingesting and managing billions of healthcare records from a wide variety of partners
• Standardizing on common data models across data types
• Orchestrating an industry-leading HIPAA privacy layer
• Innovating our proprietary de-identification and data science algorithms
• Building a culture that supports rapid iteration and new possibilities

We have big plans

The infrastructure and culture we are building will provide an environment that cultivates innovation. We want to move fast, knowing we can fix anything we break along the way. If a new need arises, we want to turn around a solution quickly. We want to solve our challenges in ways that create even more possibilities. We've created a platform that will scale to support an ever-growing array of data providers and innovative products and services. You must be able to think big while still delivering on near-term requirements.

We pride ourselves on ensuring that each team member at HealthVerity feels connected, validated, and heard. From Philadelphia to Manhattan Beach, our success is driven by recognizing that a team is made up of individuals. We offer a robust set of benefits and perks to everyone. View details on our careers page.

HealthVerity is an equal opportunity employer devoted to inclusion in the workplace. We believe incorporating different ideas, perspectives, and backgrounds makes us stronger and encourages an environment where ageism, racism, sexism, ableism, homophobia, transphobia, or any other form of discrimination is not tolerated. At HealthVerity, we're working toward an innovative and connected future for healthcare data and believe the future is better together. We can only do that if everyone has a seat at the table. Read our Equity, Inclusion and Diversity Statement.

If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to [email protected]

HealthVerity offers in-office and remote options, so you can work from anywhere within the US! #LI-Remote

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar DevOps, Engineer, Cloud, Perl, Python, and Linux jobs:

$75,000 — $120,000/year
#Location
Remote
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.

Closed by robot after the apply link errored with code 404, 3 years ago.
\nStorable is looking for an experienced DevOps Engineer that can build culture and automation to empower our software delivery teams to deploy continuously. You will help manage our AWS environment, as well as engineer and maintain a fully automated CInD pipeline that builds and deploys over 60 containerized microservices. In addition, you will develop internal standards and work with software delivery teams to provide them the autonomy necessary to deliver, test and monitor our applications using DevOps best practices. Our team is comprised of individuals across multiple states that support products ranging from Websites to container based microservices and Windows desktop applications.\n\nAll applicants must be currently authorized to work in the United States on a full-time basis.\n\nLocation: Remote\n\n\n* Storable is a distributed-first company, but is only registered for employment in a certain number of states. In order to be considered, you must permanently reside in the following states to be eligible for employment:\n\n\n\n\n\n* Primary States: Texas, Colorado, North Carolina, Tennessee, Florida, Illinois, Indiana, Kansas, Missouri, Pennsylvania, Georgia, Arizona, Alabama, Mississippi Louisiana, South Carolina, Oklahoma, Wyoming, Nebraska, Wisconsin, Virginia\n\n\n\n\nAbout Us\n\nAt Storable, we believe storage operators should have one partner they can trust to help you get the results they need for their business. 
That's why we've built the industry's first fully integrated platform that offers facility management software, facility websites, marketing programs and services, payments, and deeply integrated tenant insurance capabilities, all in one solution.

We leverage our platform, in combination with over 25 years of storage industry expertise, to help our thousands of storage customers achieve their tenant experience and operational efficiency objectives every single day.

What You'll Do

* Provide training and lead engineering teams toward DevOps best practices.
* Maintain pipelines and automation for delivering code to microservice-based environments with GitLab.
* Build tooling to support infrastructure as code with Terraform.
* Utilize Envoy and Datadog APM to proactively monitor and correct application issues.
* Build infrastructure repeatability and deployment pipelines for business analytics platforms.
* Lead efforts to improve our site reliability and ensure engineering teams have the tools to help.
* Facilitate and help design our Docker orchestration migration (Rancher to Kubernetes).
* Support container-based initiatives across multiple products.
* Accelerate DevOps tool and practice adoption for legacy applications.
* Standardize our current logging solution on Datadog.
* Maintain AWS resources and design cloud-native AWS architecture.

What You'll Need

* 5+ years of professional experience in Linux system administration, including administering Linux in a cloud-based environment
* 2+ years of professional experience with one or more scripting languages: Bash, Python, Ruby
* 2+ years of professional experience with Docker and container orchestration
* AWS administration experience
* Webmaster experience
* Working knowledge of relational databases: Postgres, MySQL
* Flexibility to operate in an environment with changing demands and priorities
* Comfortable with version control software and using Git workflows
* Experience with configuration management tools
* Experience in uptime reporting and monitoring

Bonus Points

* AWS certification (Developer or Solutions Architect)
* Linux certification (e.g., LPIC, CompTIA)
* Docker container orchestration experience (e.g., Kubernetes, Swarm, Rancher)
* Experience with monitoring/reporting tools (e.g., PagerDuty, Pingdom, Nagios, Datadog)
* Familiarity with platform logging solutions (e.g., Loggly, Logstash, Elastic Stack, Datadog)
* Understanding of network security and its implications
* Experience with SRE best practices
* Experience in a containerized CI/CD environment (e.g., GitLab, CircleCI)
* Experience with Infrastructure as Code (CloudFormation, Terraform, Ansible)
* Familiarity with Windows environments for cross-product support

#Salary and compensation
No salary data was published by the company, so we estimated the range based on similar DevOps, Engineer, Cloud, Git, Marketing, and Linux jobs:

$70,000 — $120,000/year
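With 60+ containerized microservices flowing through a single pipeline, as the Storable posting describes, the per-service job definitions are typically generated rather than hand-written. A minimal, generic sketch of that pattern (the service names, registry URL, and repository layout below are made up for illustration):

```python
# Generate GitLab-style build/deploy job definitions for a fleet of
# microservices. Registry URL and directory layout are placeholders.
REGISTRY = "registry.example.com"

def jobs_for(service: str, tag: str) -> dict:
    """Build-and-deploy job pair for one service, deploy gated on build."""
    image = f"{REGISTRY}/{service}:{tag}"
    return {
        f"build:{service}": {
            "stage": "build",
            "script": [f"docker build -t {image} services/{service}"],
        },
        f"deploy:{service}": {
            "stage": "deploy",
            "needs": [f"build:{service}"],  # GitLab DAG dependency
            "script": [f"kubectl set image deployment/{service} {service}={image}"],
        },
    }

def pipeline(services: list[str], tag: str) -> dict:
    """Assemble one pipeline dict covering every service in the fleet."""
    p = {"stages": ["build", "deploy"]}
    for s in services:
        p.update(jobs_for(s, tag))
    return p
```

Adding a sixty-first service then means appending one name to a list, not writing another job block by hand.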
Position Description:

The AWS Cloud Engineer II works under the direction of the Enterprise Architect and with other DevOps resources to complete the migration of the DataServ Software as a Service (SaaS) platform to Amazon Web Services (AWS). This position reports to the Director, Technology.

RESPONSIBILITIES:

Essential Functions

* Designs and executes low-level, detailed plans for migrating workloads to AWS.
* Incorporates sustainable, reusable infrastructure and application automation as an everyday practice.
* Implements monitoring and alerting.
* Designs, develops, and implements cloud solutions as part of the software development life cycle.
* Documents and trains others on capabilities built and troubleshooting processes.
* Analyzes and fixes system, process, and data issues experienced by DataServ solutions and DataServ clients. Works as Tier 3 support for ITO to resolve issues.
* Proactively reports issues and communicates status to affected parties regularly, especially when there are roadblocks or successful completion is at risk.
* Provides documented, thorough solution alternatives and recommendations on issues and process improvements.
* Acts as a coach and teacher to team members to build organizational AWS knowledge.

Additional Functions

* Works with the Enterprise Architect and DevOps resources to conduct proofs of concept.
* Assists with network, infrastructure, and database administration/support.
* Demonstrates and communicates successes as well as failures to the greater IT organization.

Competencies

* Communication
* Technical Expertise / Knowledge
* Teamwork
* Problem Solving / Results
* Trust / Ethical Practice

WORK ENVIRONMENT

This job operates in a professional office environment. This role routinely uses standard office equipment such as computers, printers, telephones, photocopiers, and filing cabinets.
As a Software-as-a-Service (SaaS) company, the employee must be technically savvy, with the ability to use a computer/keyboard to conduct business.

PHYSICAL DEMANDS

The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job.

While performing the duties of this job, the employee is occasionally required to stand; walk; sit; use hands to finger, handle, or feel objects, tools, or controls; reach with hands and arms; and talk or hear. The employee must occasionally lift or move items up to 20 pounds.

POSITION TYPE/EXPECTED HOURS OF WORK

This is a full-time, exempt position with days of work Monday through Friday and hours scheduled around core hours of operation. Occasional evening and weekend work may be required as job duties demand.

TRAVEL

Little to no travel is expected for this position.

REQUIRED EDUCATION/EXPERIENCE

* Bachelor's degree in Computer Science or Management Information Systems, or comparable experience
* 5 years of technical professional experience
* Experience in configuration management, Windows and Linux system management, and networking
* Experience with programming and infrastructure automation (e.g., PowerShell, Bash, Python, Perl)
* Experience with monitoring, alerting, and analytics tools
* Proven analytical thinking and problem-solving skills
* Available for after-hours and weekend support as needed

PREFERRED EDUCATION/EXPERIENCE

* 1 year of experience delivering cloud solutions (AWS preferred)
* Experience designing, developing, deploying, and testing in AWS
* Experience building and managing highly automated, secure, high-availability multi-tier solutions using a wide array of AWS services, including but not limited to CloudWatch, IAM, EC2, VPC, EBS, ELB, SNS, SQS, SES, S3, RDS, and Elasticsearch
* Experience with migration of applications and data from any platform to cloud environments, especially SQL Server to RDS
* Experience with systems thinking
* Strong written, verbal, and listening communication skills
* Ability and desire to thrive in a team-oriented, fast-paced environment

AAP/EEO STATEMENT

DataServ is an Equal Opportunity Employer/Vet/Disabled.

#Salary and compensation
No salary data was published by the company, so we estimated the range based on similar Cloud, Engineer, DevOps, Amazon, Travel, and Linux jobs:

$70,000 — $120,000/year
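The monitoring-and-alerting responsibilities in the DataServ posting often reduce, in practice, to threshold alarms of the kind CloudWatch provides: fire when some number of consecutive datapoints breach a threshold. A toy evaluator of that logic (purely illustrative; real alarms are configured through the AWS API, not code like this):

```python
# Toy CloudWatch-style alarm: enter ALARM when the last `periods`
# datapoints all exceed the threshold. Illustrative only.
def alarm_state(datapoints: list[float], threshold: float, periods: int) -> str:
    """Return "ALARM", "OK", or "INSUFFICIENT_DATA" for a metric series."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(d > threshold for d in recent) else "OK"
```

Requiring several consecutive breaches, rather than alarming on a single datapoint, is what keeps transient spikes from paging anyone.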
As a Site Reliability Engineer, you'll be helping to drive our product and engineering department forward, ensuring reliability across different parts of the Paddle platform and helping our engineers to work better and more efficiently.

The Paddle SRE team's role is "everything SRE," with a focus on infrastructure, reliability standards, and practices. By following this model:

* It's easy to spot patterns and draw similarities between services and projects.
* We act as the glue between disparate product teams, creating solutions out of distinct pieces of software.
* We enable product engineers to use DevOps practices to maintain user-facing products without divergence in practice across the business.
* We define production standards as code and work to smooth out any sharp edges, greatly simplifying things for the product engineers running their services.

What you'll do

* Be on the on-call rotation to respond to incidents
* Handle production incidents, author blameless postmortems, and enrich operational playbooks and runbooks
* Create, maintain, and test our system recovery process
* Handle monitoring, alerting, and SLO tracking
* Develop tools to maximise engineering efficiency, such as automating the deployment infrastructure
* Be an advocate of the GitOps methodology
* Collaborate with and enable engineers to do their jobs more efficiently
* Seek out processes that can be improved with automation

We'd love to hear from you if you:

* Have AWS experience; we use ECS/Fargate, EC2, RDS, S3, and Lambda
* Have a software engineering background, and ideally experience with Go, which we use for our tooling
* Have knowledge of platform and ops concepts such as networking and Linux administration
* Have experience working with microservices and distributed systems at scale
* Have experience with monitoring tools; we use New Relic, Grafana, ELK, Pingdom, and PagerDuty

Why you'll love working at Paddle

We are a diverse team of around 140 people and care deeply about enabling a great culture which is inclusive no matter your background. We celebrate our diverse group of talented employees, and we pride ourselves on our transparent, collaborative, friendly, and respectful culture.

We offer a full slate of benefits, including competitive salaries, stock options, pension plans, private healthcare, and on-site coaching sessions. We believe in flexible working and offer all team members unlimited holidays and 3 months' paid parental leave regardless of gender. We value learning and will help you with your personal development where we can, from constant exposure to new challenges and an annual learning stipend to regular internal and external training.

About us

Our mission is to help software companies succeed, enabling them to focus on creating products the world loves. Thousands of companies rely on our revenue delivery platform to sell their software products globally, as well as on our powerful analytics and marketing tools to understand and grow their businesses.

Our vision is to become the platform that all software companies use to run and grow their business. We aim to replace a fragmented ecosystem of specialised tools with a unified platform that removes the complex burden that comes with running a software business, whilst also providing unparalleled insight to help them grow faster.

The Deloitte Fast 50 has named us among the fastest-growing software companies in the UK four years running, and we've raised over $93m in funding from incredible investors such as FTV Capital, Kindred, Notion, and 83North.

Equal opportunities

We believe that having diverse teams in which everyone can be their authentic self is key to our success. We encourage people from underrepresented backgrounds to apply, and we don't discriminate based on race, colour, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, marital status, disability, or age.
Our office is wheelchair friendly and we are a family-friendly employer.

#Salary and compensation

No salary data was published by the company, so we estimated the range based on similar Admin, Engineer, Sys Admin, DevOps, Marketing, and Linux jobs:

$70,000 — $120,000/year
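SLO tracking, listed among the Paddle SRE team's duties, is commonly framed in terms of error budgets: a 99.9% availability target over a 30-day window allows roughly 43 minutes of downtime, and incidents spend that budget down. A sketch of the arithmetic (illustrative only, not Paddle's actual tooling):

```python
# Error-budget arithmetic for an availability SLO. Illustrative sketch.
def error_budget_minutes(slo: float, window_days: int) -> float:
    """Total allowed downtime for the window, in minutes.
    e.g. slo=0.999 over 30 days -> 43.2 minutes."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative = SLO blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget
```

Teams often gate risky work on this number: plenty of budget left means ship freely; a nearly spent budget means prioritize reliability work.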
Crunch, part of YouGov PLC, is a market-defining company in the analytics SaaS marketplace. We're a company on the rise. We've built a revolutionary platform that transforms our customers' ability to drive insight from market research and survey data. We offer a complete survey data analysis platform that allows market researchers, analysts, and marketers to collaborate in a secure, cloud-based environment, using a simple, intuitive drag-and-drop interface to prepare, analyze, visualize, and deliver survey data and analysis. Quite simply, Crunch provides the quickest and easiest way for anyone, from CMO to PhD, with zero training, to analyze survey data. Users create tables, charts, graphs, and maps. They filter and slice-and-dice survey data directly in their browser.

Our start-up culture is casual, respectful of each other's varied backgrounds and lives, and high-energy because of our shared dedication to our product and our mission. We are loyal to each other and our company. We value work/life balance, efficiency, simplicity, and fantastic customer service! Crunch has no offices and fully embraces a 100% remote culture. We have 40 employees spread across 5 continents. Remote work at Crunch is flexible and largely independent, yet highly cooperative.

We are hiring a DevOps Lead to help expand our platform and operational excellence. We are inviting you to join our small, fully remote team of developers and operators helping make our platform faster, more secure, and more reliable. You will need to be self-motivated and disciplined in order to work with our fully distributed team.

We are looking for someone who is a quick study, who is eager to learn and grow with us, and who has experience in DevOps and Agile cultures. At Crunch, we believe in learning together: we recognize that we don't have all the answers, and we try to ask each other the right questions.
As Crunch employees are completely distributed, it's crucial that you can work well independently and keep yourself motivated and focused.

Our Stack:

We currently run our in-house production Python code against Redis, MongoDB, and Elasticsearch services. We proxy API requests through NGINX, load balance with ELBs, and deploy our React web application to the AWS CloudFront CDN. Our current CI/CD process is built around GitHub, Jenkins, and Blue Ocean, including unit, integration, and end-to-end tests and automated system deployments. We deploy to Auto Scaling groups using Ansible and cloud-init.

In the future, all or part of our platform may be deployed via Drone CI, Kubernetes, NGINX ingress, Helm, and Spinnaker.

What you'll do:

As a Leader:

* Manage and lead a team of Cloud Operations Engineers who are tasked with ensuring our uptime guarantees to our customer base.
* Scale the worldwide Cloud Operations Engineering team with the strategic implementation of new processes and tools.
* Hire and ramp up exceptional Cloud Operations Engineers.
* Assist in scoping, designing, and deploying systems that reduce Mean Time to Resolve for customer incidents.
* Inform executive leadership and escalation management personnel of major outages.
* Compile and report KPIs across the full company.
* Work with Sales Engineers to complete pre-sales questionnaires and to gather customer use metrics.
* Prioritize projects competing for human and computational resources to achieve organizational goals.

As an Engineer:

* Monitor and detect emerging customer-facing incidents on the Crunch platform; assist in their proactive resolution, and work to prevent them from occurring.
* Coordinate and participate in a weekly on-call rotation, where you will handle short-term customer incidents (from direct surveillance or through alerts via our Technical Services Engineers).
* Diagnose live incidents, differentiate between platform issues and usage issues across the entire stack (hardware, software, application, and network) within physical datacenter and cloud-based environments, and take the first steps towards resolution.
* Automate routine monitoring and troubleshooting tasks.
* Cooperate with our product management and engineering organizations by identifying areas for improvement in the management of applications powering the Crunch infrastructure.
* Provide consistent, high-quality feedback and recommendations to our product managers and development teams regarding product defects or recurring performance issues.
* Be the owner of our platform. This includes everything from our cloud provider implementation to how we build, deploy, and instrument our systems.
* Drive improvements and advancements to the platform in areas such as container orchestration, service mesh, and request/retry strategies.
* Build frameworks and tools to empower safe, developer-led changes, automate the manual steps, and provide insight into our complex system.
* Work directly with software engineering and infrastructure leadership to enhance the performance, scalability, and observability of multiple applications, ensure that production hand-off requirements are met, and escalate issues.
* Embed into SRE projects to stay close to the operational workflows and issues.
* Evangelize the adoption of best practices in relation to performance and reliability across the organization.
* Provide a solid operational foundation for building and maintaining successful SRE teams and processes.
* Maintain project and operational workload statistics.
* Promote a healthy and functional work environment.
* Work with security experts to do periodic penetration testing, and drive resolution for any issues discovered.
* Liaise with IT and Security Team Leads to successfully complete cross-team projects, filling in for these Leads when necessary.
* Administer a large portfolio of SaaS tools used throughout the company.

Qualifications:

* Team lead experience of an on-call DevOps, SRE, or Cloud Operations team (at least 2 years).
* Experience recruiting, mentoring, and promoting high-performing team members.
* Experience being an on-call DevOps, SRE, or Cloud Operations engineer (at least 2 years).
* Proven track record of designing, building, sizing, optimizing, and maintaining cloud infrastructure.
* Proven experience developing software, CI/CD pipelines, and automation, and managing production infrastructure in AWS.
* Proven track record of designing, implementing, and maintaining full CI/CD pipelines in a cloud environment (Jenkins experience preferred).
* Experience with containers and container orchestration tools (Docker, Kubernetes, Helm, Traefik, NGINX ingress, and Spinnaker experience preferred).
* Expertise with Linux system administration (5 years) and networking technologies, including IPv6.
* Knowledgeable about a wide range of web and internet technologies.
* Knowledge of NoSQL database operations and concepts.
* Experience in monitoring, system performance data collection and analysis, and reporting.
* Capability to write small programs/scripts to solve both short-term systems problems and to automate repetitive workflows (Python and Bash preferred).
* Exceptional English communication and troubleshooting skills.
* A keen interest in learning new things.

#Salary and compensation
No salary data was published by the company, so we estimated the range based on similar DevOps, Executive, React, English, Elasticsearch, Cloud, NoSQL, Python, API, Sales, SaaS, Engineer, Nginx, and Linux jobs:

$70,000 — $120,000/year
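Reducing Mean Time to Resolve, one of the leadership goals in the Crunch posting, starts with measuring it consistently from incident records. A minimal stand-alone sketch of that computation (the incident field names here are invented, not Crunch's schema):

```python
# Compute MTTR from incident open/close timestamps.
# The "opened"/"resolved" field names are hypothetical.
from datetime import datetime

def mttr_minutes(incidents: list[dict]) -> float:
    """Mean Time to Resolve, in minutes, over resolved incidents only.
    Each incident carries ISO-8601 "opened" and "resolved" timestamps;
    incidents still open (resolved is None/missing) are excluded."""
    durations = [
        (datetime.fromisoformat(i["resolved"])
         - datetime.fromisoformat(i["opened"])).total_seconds() / 60
        for i in incidents
        if i.get("resolved")
    ]
    return sum(durations) / len(durations) if durations else 0.0
```

Tracked week over week, this single number makes the effect of new runbooks or alerting changes visible.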
Summary:

We are looking for a Senior Java DevOps Engineer to join Selerity's team, scaling up an A.I.-driven analytics and recommendation platform and integrating it into enterprise workflows. We offer highly competitive compensation plus significant opportunities for professional growth and career advancement.

Employment Type: Contract or Full-time

Location is flexible: We have offices in New York City and Oak Park, Illinois (a Chicago suburb), but about half of our team currently works remotely from various parts of Europe, North America, and Asia.

Job Description:

Want to change how the world engages with chat, research, social media, news, and data?

Selerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We're tackling big challenges in predictive analytics, conversational interfaces, and workflow automation, and we need your help!

We're looking for an experienced DevOps Engineer to join a major initiative at a critical point in our company's growth, assisting in the architecture, development, and maintenance of our stack. The majority of Selerity's applications are developed in Java and C++ on Linux, but knowledge of other languages (especially Python, JavaScript, and Scala), platforms, and levels of the stack is very helpful.

Must-haves:

* A rock-solid background in Computer Science (minimum BS in Comp Sci or a related field) plus at least 5 years (ideally 10+) of challenging work experience.
* Implementation of DevOps/SRE processes at scale, including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite).
* Demonstrated proficiency building and modifying Java applications in Linux environments (using Git, SVN); ideally also a C++ developer.
* Significant orchestration expertise with the Ansible (preferred), Chef, or Puppet deployment automation systems in a cloud environment (at least a dozen servers, ideally more).
* Direct experience in the design, implementation, and maintenance of SaaS APIs in Java that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).
* A solid track record of making effective design decisions that balance near-term and long-term objectives.
* Knowing when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.
* The ability to work effectively in agile teams with remote members: get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.
* The ability to clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc.
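The orchestration expertise asked for above usually means rolling changes across a fleet in controlled batches, the way Ansible's `serial` keyword limits how many hosts update at once. A small stand-alone sketch of that batching (hostnames are hypothetical; this illustrates the idea, it is not Ansible itself):

```python
# Split a host fleet into rolling-update batches, mimicking the effect
# of Ansible's `serial` setting. Illustrative sketch only.
def rolling_batches(hosts: list[str], serial: int) -> list[list[str]]:
    """Return batches of at most `serial` hosts, preserving order.
    Updating batch-by-batch keeps most of the fleet serving traffic
    while each batch is taken down, upgraded, and health-checked."""
    if serial < 1:
        raise ValueError("serial must be >= 1")
    return [hosts[i:i + serial] for i in range(0, len(hosts), serial)]
```

With a dozen servers and `serial=3`, for example, an operator gets four batches and can abort after any batch whose health checks fail.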
Nice-to-haves:

* Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as ISO layer 1 and 2 protocols.
* Experience with Internet routing protocols such as BGP.
* Implementation of software-defined networking or other non-traditional networking paradigms.
* Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.
* Full-stack development and operations experience with web apps on Node.js.
* Experience with analytics visualization libraries.
* Experience with large-scale analytics and machine learning technologies, including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc., running in production.
* Conversant with relational, column, object, and graph database fundamentals, with strong practical experience in any of those paradigms.
* Deep understanding of how to build software agents and conversational workflows.
* Experience with additional modern programming languages (Python, Scala, …)

Our stack:

* Java, C++, Python, JavaScript/ECMAScript plus Node, Angular, RequireJS, Electron, Scala, etc.
* A variety of open-source and in-house frameworks for natural language processing and machine learning, including artificial neural networks / deep learning.
* A hybrid of AWS (EC2, S3, RDS, R53) plus dedicated datacenter network, server, and GPU/coprocessor infrastructure.
* Cassandra and Aurora, plus an in-house streaming analytics pipeline (similar to Apache Flink) and an indexing/query engine (similar to Elasticsearch).
* In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (with similarities to protobufs/FixFast/ZeroMQ/ITCH).
* Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, IntelliJ.

This position offers a great opportunity to work with advanced technologies, collaborate with a top-notch global team, and disrupt a highly visible, multi-billion-dollar market.

Compensation:

We understand how to attract and retain the best talent and offer a competitive mix of salary, benefits, and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.

We have team members on four continents, and we're adept at making remote workers feel like part of the team. If you join our NYC main office, be sure to bring your Nerf toys, your drones, and your maker gear; we're into that stuff, too.

Interview Process:

If you can see yourself at Selerity, send your resume and/or online profile (e.g., LinkedIn) to [email protected]. We'll arrange a short introductory phone call, and if it sounds like there's a match, we'll arrange for you to meet the team for a full interview.

The interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. It is intended to be challenging, but the developers you meet and the topics you'll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity.

We value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence. You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity, or national origin.

Recruiters: Please note that we are not currently accepting referrals from recruiters for this position.

#Salary and compensation
No salary data was published by the company, so we estimated the range based on similar DevOps, Java, Senior, Engineer, Crypto, Finance, Cloud, SaaS, Apache, and Linux jobs:

$70,000 — $120,000/year
\nMapD is seeking a Site Reliability Engineer to add to its Cloud Operations and Security team. As a Site Reliability Engineer, you will work closely with other SREs to maintain, optimize, develop, and secure the delivery of the world’s first GPU-accelerated analytics SaaS platform. You should have solid automation, security, and DevOps skills to bring to the table, and experience working in environments with compliance requirements (eg. SOC 2, FEDRAMP, PCI, etc.). This would be a major plus. You must have previous proven experience working on a public cloud platform (e.g. AWS, GCP, Azure, etc.). Key to this role is being self-motivated and a self-starter, as well as a strong passion for system improvement and optimization. \n\nWe’re big fans of hiring people who are not just great at what they do but also how they do it. Critical to our culture is building and maintaining a team that works well together and knows how to communicate effectively - not just within their own team, but also across peripheral teams. \n\nWe don’t believe in divas or rock stars. We are looking for someone who embodies the best parts of open-source culture: Humility, open-mindedness, positivity, and respect for others. A team member who doesn’t try to single-handedly save the day but embraces input and collaboration as a means to find the best solution.\n\nYour success in this role will be predicated on your ability to prioritize your work, be self-motivated and a self-starter, to speak up early and often, and to work well with others. You should be passionate about building highly available, scalable, and automated “hands-off” systems for customers, and being one of the “go-to” people that team members can trust to get things done and keep things running smoothly with a minimum of fuss. We’re great at encouraging our people to learn different technologies, continue their professional growth, and try out new ways of doing things. We’re in it for the long-haul, and you should be too. 
\n\nOur office is located in downtown San Francisco, and this position will initially report to the Director of Cloud Operations and Information Security. This is an individual contributor role and will not manage other people. While our preference is to hire local employees, we will also consider exceptional candidates for remote work. \n\nResponsibilities:\n\n\n* Build highly available, automated, scalable microservices with a high quality of service for customers\n\n* Write and maintain customized toolsets for leading-edge GPU technologies\n\n* Implement and maintain service monitoring and reporting\n\n* Respond to and investigate incidents, recommending and implementing changes to resolve problems permanently\n\n* Keep all cloud services running reliably 24x7\n\n\n\n\nQualifications:\n\n\n* BS or higher degree in Computer Science, or equivalent industry or work experience.\n\n* 2+ years of SRE/DevOps experience; previous experience in an enterprise SaaS software environment strongly preferred.\n\n* Strong desire to continue to learn and improve systems, yourself, and others.\n\n* Previous experience with a public cloud provider required (AWS, GCP, Azure, etc.)\n\n* Previous experience working on Linux (as well as Bash/Python programming skills) is required.\n\n* Previous experience with clustered Docker environments.\n\n* Previous experience with cloud secrets management, service discovery, and scheduling.\n\n* Previous experience with CI (e.g., Jenkins) and CM (e.g., Ansible, SaltStack) automation tools.\n\n* A passion for automation and security.\n\n\n\n\nBonuses:\n\n\n* Design and implementation of security controls, and understanding of good security practices\n\n* Experience working in and designing systems for environments with compliance requirements (e.g., SOC 2, FedRAMP)\n\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Admin, Engineer, Sys Admin, DevOps, Cloud, SaaS and Linux jobs:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Selerity and want to re-open this job? Use the edit link in the email when you posted the job!
\nSummary:\n\nWe are looking for a Senior DevOps (Site Reliability) Engineer to join Selerity’s team, scaling up an A.I. driven analytics and recommendation platform and integrating it into enterprise workflows. Highly competitive compensation plus significant opportunities for professional growth and career advancement.\n\nEmployment Type: Contract or Full-time\n\nLocation is flexible: We have offices in New York City and Oak Park, Illinois (Chicago suburb) but about half of our team currently works remotely from various parts of Europe, North America, and Asia. \n\n\nJob Description:\n\nWant to change how the world engages with chat, research, social media, news, and data?\n\nSelerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We're tackling big challenges in predictive analytics, conversational interfaces, and workflow automation and need your help!\n\nWe’re looking for an experienced DevOps (Site Reliability) Engineer to join a major initiative at a critical point in our company’s growth. The majority of Selerity’s applications are developed in Java and C++ on Linux but knowledge of other languages (especially Python and JavaScript), platforms and levels of the stack is very helpful.\n\n\n\nMust-haves:\n\n * Possess a rock-solid background in Computer Science (minimum BS in Comp Sci or related field) + at least 5 years (ideally 10+) of challenging work experience.\n\n * Implementation of DevOps / SRE processes at scale including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite).\n\n * Demonstrated proficiency building and modifying Java and C++ applications in Linux environments (using Git, SVN). 
\n\n * Significant operations expertise with Ansible (preferred), Chef, or Puppet deployment automation in a cloud environment.\n\n * Direct experience in the design, implementation, and maintenance of SaaS APIs that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).\n\n * Solid track record of making effective design decisions that balance near-term and long-term objectives.\n\n * Know when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.\n\n * Work effectively in agile teams with remote members; get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.\n\n * Clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc. \n\n\n\nNice-to-haves:\n\n * Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as OSI layer 1 and 2 protocols.\n\n * Experience with Internet routing protocols such as BGP.\n\n * Implementation of software-defined networking or other non-traditional networking paradigms.\n\n * Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.\n\n * Full-stack development and operations experience with web apps on Node.js.\n\n * Experience with analytics visualization libraries.\n\n * Experience with large-scale analytics and machine learning technologies including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc.\n\n * Conversant with relational, column, object, and graph database fundamentals, with strong practical experience in any of those paradigms.\n\n * Deep understanding of how to build software agents and conversational workflows.\n\n * Experience with additional modern programming languages (Python, Scala, …)\n\n\n\nOur stack:\n\n * Java, C++, Python, JavaScript/ECMAScript + Node, Angular, RequireJS, Electron, Scala, etc.\n\n * A variety of open source and in-house 
frameworks for natural language processing and machine learning including artificial neural networks / deep learning.\n\n * Hybrid of AWS (EC2, S3, RDS, R53) + dedicated datacenter network, server and GPU/coprocessor infrastructure.\n\n * Cassandra, Aurora plus in-house streaming analytics pipeline (similar to Apache Flink) and indexing/query engine (similar to Elasticsearch).\n\n * In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (similarities to protobufs/FixFast/zeromq/itch).\n\n * Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, IntelliJ.\n\nThis position offers a great opportunity to work with advanced technologies, collaborate with a top-notch, global team, and disrupt a highly visible, multi-billion-dollar market. \n\n\n\nCompensation:\n\nWe understand how to attract and retain the best talent and offer a competitive mix of salary, benefits and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.\n\nWe have team members on four continents and we're adept at making remote workers feel like part of the team. If you join our NYC main office, be sure to bring your Nerf toys, your drones and your maker gear - we’re into that stuff, too.\n\n\nInterview Process:\n\nIf you can see yourself at Selerity, send your resume and/or online profile (e.g., LinkedIn) to [email protected]. We’ll arrange a short introductory phone call and if it sounds like there’s a match we'll arrange for you to meet the team for a full interview. \n\nThe interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. 
It is intended to be challenging - but the developers you meet and the topics you’ll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity. \n\nWe value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence. You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity or national origin.\n\n\n\nRecruiters: Please note that we are not currently accepting referrals from recruiters for this position. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar DevOps, Senior, Crypto, Finance, Java, Cloud, Python, SaaS, Engineer, Apache and Linux jobs:\n\n
$70,000 — $120,000/year\n
This job post is closed and the position is probably filled. Please do not apply. Work for JP Morgan Chase and want to re-open this job? Use the edit link in the email when you posted the job!
DevSecOps Engineer\n \nGlasgow Technology Centre\n \nPurpose of role\n \n The purpose of this role is to be a DevSecOps engineer who can help to automate everything as we scale up the CIB Notifications solution.\n \nWhat sets this role apart?\n \nIn this role, the candidate would be a DevSecOps engineer embedded within the development team. We need someone who can come in and evangelise DevSecOps across the team, leading by example, creating best-in-class solutions to help us:\n\n* Visualise our application through TICK/ELK stacks, identifying areas of improvement across our applications. \n* Drive our monitoring strategy as we move to a micro-service architecture \n* Develop tooling that supports the overall development and deployment of our applications. \n* Optimise our existing Continuous Deployment pipelines and introduce more CD pipelines to our entire application suite. \n* Ensure we meet all cyber/security constraints and are able to automatically identify risks.\n\n\n\n\n\n \nWe are in the process of migrating applications to our internal private cloud and expanding the capabilities of the application to scale horizontally as we grow the user base. You will be exposed to cloud development and deployments, and containerisation using Docker. We are also building full-stack JavaScript solutions from the ground up and have continuous delivery set up, meaning your changes can be in production within minutes. We're able to do this as we have a team that has a culture centred on quality and best practice. 
We are autonomous and empowered to make changes to process to make sure we're building the right software in the right way\n \nKey responsibilities \n \n\n* Work with the team to develop the CIB Notifications solution \n* Design and implement the ELK/TICK stack data analytics solution \n* Integrate into operational tools such as Slack, HipChat, Mattermost, Rocketchat \n* Be familiar with NoSQL database technologies: DynamoDB, Cassandra, InfluxDB \n* Be familiar with graphical data analytics tools: Grafana, Kibana, Graylog\n\n \nYou need development skills in Node.js, JavaScript, and TypeScript, plus server-side development/scripting technologies: Python 2.7/3+, Bash, PowerShell.\n\n* Develop new user-facing features \n* Build reusable code and libraries for future use \n* Ensure the technical feasibility of UI/UX designs by collaborating with the UI/UX team \n* Optimize applications for maximum speed and scalability \n* Collaborate with other team members and stakeholders \n* Strive for continuous improvement through active participation in team, J.P. Morgan community and site-wide activities\n\nTo be successful in this role you will be able to display the following: \n\nRequired (experience with)\n\n* Tool Skills (familiar with) \n\n* Setup and configuration of ELK or TICK stacks \n* Operational integration tools e.g. Slack, HipChat, Mattermost \n* NoSQL database technologies e.g. DynamoDB (AWS), Redis, Cassandra, InfluxDB \n* Graphical data analytics tools e.g. Grafana, Kibana, Graylog \n* Infrastructure monitoring tools e.g. Icinga2, Zabbix, SolarWinds\n\n* Familiarity with administering a Linux system, at least at a junior level.\n\n* Development Skills \n\n* Modern web development technologies \n\n* Node.js, React, Angular, TypeScript\n\n* Server-side development/scripting technologies \n\n* Python (2.7 or 3+) \n* Bash, PowerShell\n\nPersonality traits\n\n* Be proactive, pragmatic, independent and resourceful in nature \n* Be passionate about web technology with a keen interest in working on the latest technology offerings \n* Be able to present evidence of self-motivation and passion for web technology \n* Be familiar with (or have developed opinions on) how to structure large-scale applications/projects and the necessary processes, team structures and technical approaches \n* Have considered opinions on how to work within such a team but possess an open mind to direction when given \n* Be community-minded in your approach to work - and be active in consuming and providing information, teaching and helping across all team units\n\nOur Corporate & Investment Bank relies on innovators like you to build and maintain the technology that helps us safely service the world's important corporations, governments and institutions. 
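The TICK stack mentioned in the requirements above centres on InfluxDB, whose write API accepts points in a plain-text "line protocol". As a minimal sketch in Python (one of the post's named scripting languages; the helper name is invented, and only common escaping cases are handled), building such points might look like this:

```python
def influx_line(measurement, tags, fields, timestamp_ns=None):
    """Build one InfluxDB line-protocol point.

    Commas, spaces and equals signs in identifiers must be escaped;
    string field values are double-quoted, and integers get an 'i' suffix.
    """
    def esc(s):
        return s.replace(",", r"\,").replace(" ", r"\ ").replace("=", r"\=")

    def fmt_value(v):
        if isinstance(v, bool):          # bool before int: bool is an int subclass
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"               # integer fields carry an 'i' suffix
        if isinstance(v, float):
            return repr(v)
        return '"' + str(v).replace('"', r'\"') + '"'

    tag_part = "".join(f",{esc(k)}={esc(v)}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{esc(k)}={fmt_value(v)}" for k, v in fields.items())
    line = f"{esc(measurement)}{tag_part} {field_part}"
    if timestamp_ns is not None:
        line += f" {timestamp_ns}"       # nanosecond timestamp is optional
    return line

print(influx_line("cpu", {"host": "web 1"}, {"usage": 0.64, "cores": 8}))
# cpu,host=web\ 1 usage=0.64,cores=8i
```

A monitoring agent would batch lines like these and POST them to InfluxDB's write endpoint; Grafana or Chronograf then queries the stored series.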
You'll develop solutions for a bank entrusted with holding $18 trillion of assets and $393 billion in deposits. CIB provides strategic advice, raises capital, manages risk, and extends liquidity in markets spanning over 100 countries around the world.\nWhen you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 14 technology hubs worldwide, our team of 40,000+ technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $9.5B+ annual investment in technology enables us to hire people to create innovative solutions that will not only transform the financial services industry, but also change the world. \n\nAt JPMorgan Chase & Co. we value the unique skills of every employee, and we're building a technology organization that thrives on diversity. We encourage professional growth and career development, and offer competitive benefits and compensation. If you're looking to build your career as part of a global technology team tackling big challenges that impact the lives of people and companies all around the world, we want to meet you. \n\n©2017 JPMorgan Chase & Co. JPMorgan Chase is an equal opportunity and affirmative action employer Disability/Veteran \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar DevOps, Engineer, Teaching, JavaScript, Cloud, NoSQL, Python, Mobile, Junior and Linux jobs:\n\n
$70,000 — $120,000/year\n
This job post is closed and the position is probably filled. Please do not apply. Work for Spinn3r and want to re-open this job? Use the edit link in the email when you posted the job!
\nCompany\n\nSpinn3r is a social media and analytics company looking for a talented Java “big data” engineer. \n\nAs a mature, ten-year-old company, Spinn3r provides high-quality news, blog and social media data for analytics, search, and social media monitoring companies. We’ve just recently completed a large business pivot, and we’re in the process of shipping new products, so it's an exciting time to come on board!\n\nIdeal Candidate\n\nWe're looking for someone with a passion for technology, big data, and the analysis of vast amounts of content; someone with experience aggregating and delivering data derived from web content, and someone comfortable with a generalist and devops role. We require that you have knowledge of standard system administration tasks and a firm understanding of modern cluster architecture. \n\nWe’re a San Francisco company, and ideally there should be at least a 4-hour overlap with the Pacific Standard Time Zone (PST / UTC-8). If you don't have a natural time overlap with UTC-8, you should be willing to work an alternative schedule to be able to communicate easily with the rest of the team. \n\nCulturally, we operate as a “remote” company and require that you’re generally available for communication, self-motivated, and productive.\n\nWe are open to either a part-time or full-time independent contractor role.\n\nResponsibilities\n\n\n* Understanding our crawler infrastructure;\n\n* Ensuring top-quality metadata for our customers. There's a significant batch job component to analyze the output to ensure top-quality data;\n\n* Making sure our infrastructure is fast, reliable, fault-tolerant, etc. At times this may involve diving into the source of tools like ActiveMQ to understand how the internals work. We contribute to Open Source development to give back to the community; and\n\n* Building out new products and technology that will directly interface with customers. 
This includes cool features like full-text search, analytics, etc. It's extremely rewarding to build something from the ground up and push it to customers directly. \n\n\n\n\nArchitecture\n\nOur infrastructure consists of Java on Linux (Debian/Ubuntu) with the stack running on ActiveMQ, Zookeeper, and Jetty. We use Ansible to manage our boxes. We have a full-text search engine based on Elasticsearch which also backs our Firehose API.\n\nHere are the products that you get to work with:\n\n\n* Large Linux / Ubuntu cluster running with the OS versioned using both Ansible and our own Debian packages for software distribution;\n\n* Large amounts of data indexed from the web and social media. We index from 5-20TB of data per month and want to expand to 100TB of data per month; and \n\n* Solr / Elasticsearch migration / install. We’re experimenting with bringing this up now, so it would be valuable to get your feedback.\n\n\n\n\nTechnical Skills\n\nWe're looking for someone with a number of the following requirements:\n\n\n* Experience in modern Java development and associated tools: Maven, IntelliJ IDEA, Guice (dependency injection);\n\n* A passion for testing, continuous integration, and continuous delivery;\n\n* ActiveMQ. Powers our queue server for scheduling crawl work;\n\n* A general understanding of, and passion for, distributed systems;\n\n* Ansible or equivalent experience with configuration management; \n\n* Standard web API use and design (HTTP, JSON, XML, HTML, etc.); and\n\n* Linux, Linux, Linux. We like Linux!\n\n\n\n\n\nCultural Fit\n\nWe’re a lean startup and very driven by our interaction with customers, as well as their happiness and satisfaction. 
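Returning to the responsibilities above: there is a significant batch component that checks crawler metadata quality. As a minimal, hypothetical sketch (the record fields and helper name are invented for illustration; Spinn3r's actual pipeline is Java), such a validation pass might look like this in Python:

```python
def validate_records(records, required=("url", "title", "published")):
    """Split crawler output into clean and rejected records.

    A record is rejected when a required field is missing or empty, or
    when its URL duplicates one already seen in the batch.
    """
    seen_urls = set()
    clean, rejected = [], []
    for rec in records:
        missing = [f for f in required if not rec.get(f)]
        if missing:
            rejected.append((rec, f"missing: {', '.join(missing)}"))
        elif rec["url"] in seen_urls:
            rejected.append((rec, "duplicate url"))
        else:
            seen_urls.add(rec["url"])
            clean.append(rec)
    return clean, rejected

batch = [
    {"url": "http://a.example/1", "title": "A", "published": "2015-01-01"},
    {"url": "http://a.example/1", "title": "A again", "published": "2015-01-01"},
    {"url": "http://a.example/2", "title": "", "published": "2015-01-02"},
]
clean, rejected = validate_records(batch)
print(len(clean), len(rejected))  # 1 2
```

At crawler scale a pass like this would run as a scheduled batch job, with the rejection reasons aggregated into quality metrics rather than inspected record by record.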
Our philosophy is that you shouldn’t be afraid to throw away a week's worth of work if our customers aren’t interested in moving in that direction.\n\nWe hold the position that our customers are our responsibility and we try to listen to them intently and consistently:\n\n\n* Proficiency in English is a requirement. Since you will have colleagues in various countries with various primary language skills, we all need to use English as our common company language. You must also be able to work with email, draft proposals, etc. Internally we work as a large distributed Open Source project and use tools like email, Slack, Google Hangouts, and Skype; \n\n* Familiarity working with a remote team and ability (and desire) to work for a virtual company. Should have a home workstation, fast Internet access, etc.;\n\n* Must be able to manage your own time and your own projects. Self-motivated employees will fit in well with the rest of the team; and\n\n* It goes without saying, but being friendly and a team player is very important.\n\n\n\n\nCompensation\n\n\n* Salary based on experience;\n\n* We're a competitive, great company to work for; and\n\n* We offer the ability to work remotely, allowing for a balanced work-life situation.\n\n\n\n\n\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Java, Engineer, DevOps, English, Elasticsearch, API and Linux jobs:\n\n
$70,000 — $120,000/year\n
This job post is closed and the position is probably filled. Please do not apply. Work for Spinn3r and want to re-open this job? Use the edit link in the email when you posted the job!
\nSpinn3r is a social media and analytics company looking for a talented Java big data engineer. We're primarily interested in someone with experience delivering high-quality, accurate data derived from web content.\n\nSpinn3r provides high-quality weblog and social media data for analytics, search, and social media monitoring companies. We've been in business for over 7 years now and just recently completed a large business pivot and a relaunch.\n\nWe’re in the process of shipping new products, so it's an exciting time to jump on board!\n\nRemote Work\n\nThis is a remote position. The main team operates from UTC-08:00 in San Francisco, CA, on the west coast of the United States. You must be willing to work with at least a 4-hour overlap.\n\nThis ideally works for anyone in central Europe and South America, as it's closer to our main timezone.\n\nAdditionally, you must be highly motivated and able to work independently.\n\nIdeal Candidate\n\nWe're interested in someone comfortable with a generalist and devops role. You should be knowledgeable about standard system administration tasks and have a firm understanding of the role of load balancers and cluster architecture. It's 100x harder to write code if you don't know how the underlying operating system works.\n\nWe're looking for someone with a legitimate passion for technology, big data, and analyzing vast amounts of content.\n\nWe are also looking for people outside of the U.S. and Canada to maximize our time zone distribution. Ideally there should be at least a 4-hour overlap with the Pacific Standard Time Zone (PST / UTC-8). We're based out of San Francisco but are expanding internationally. If you don't have a natural time overlap with UTC-8, you should be willing to work evenings to be able to communicate easily with the rest of the team.\n\nCulturally, we’re a remote company and want to embrace it as a way to reward our employees. 
We are fine with you working in remote locations as long as you’re generally available for communication and are productive.\n\nWe want someone to come in full time in a contractor role. I suspect we will need about 40 hours from you per week. \n\nJob Responsibilities:\n\n\n* Understanding our crawler infrastructure and ensuring top-quality metadata for our customers. There's a significant batch job component to analyze the output from the crawl to ensure top-quality data.\n\n* Making sure our infrastructure is fast, reliable, fault-tolerant, etc. At times this may involve diving into the source of tools like ActiveMQ or Cassandra to understand how the internals work. We contribute a LOT to Open Source development, giving our changes back to the community.\n\n* Building out new products and technology that will directly interface with customers. This includes cool features like full-text search, analytics, etc. It's extremely rewarding to build something from the ground up and push it to customers directly. \n\n\n\n\n\nArchitecture:\n\nOur infrastructure consists of Java on Linux (Debian/Ubuntu) with the stack running on ActiveMQ, Cassandra, Zookeeper, and Jetty. We use Ansible to manage our boxes. We have a full-text search engine based on Elasticsearch and store our firehose API data within Cassandra.\n\nWe have a totally new stack and infrastructure at this point. We recently did a full-stack rewrite and moved all the old code to our new infrastructure. This means we have very little legacy cruft to deal with.\n\nHere's all the cool stuff you get to play with:\n\n\n* Large Linux / Ubuntu cluster running with the OS versioned using both Ansible and our own Debian packages for software distribution.\n\n* Massive amounts of data indexed from the web and social media. We index from 5-20TB of data per month and want to expand to 100TB of data per month.\n\n* Large Cassandra install on SSD. 
\n\n* Solr / Elasticsearch migration / install. We’re experimenting with bringing this up now, so it would be valuable to get your feedback.\n\n\n\n\n\nTechnical Skills:\n\nHere's where you shine! We're looking for someone with a number of the following requirements:\n\n\n* Linux. Linux. Linux. Did I say Linux? We like Linux.\n\n* Experience in modern Java development and associated tools: Maven, IntelliJ IDEA, Guice (dependency injection)\n\n* A passion for testing, continuous integration, and continuous delivery.\n\n* Cassandra. Stores content indexed by our crawler.\n\n* ActiveMQ. Powers our queue server for scheduling crawl work.\n\n* A general understanding of, and passion for, distributed systems.\n\n* Ansible or equivalent experience with configuration management.\n\n* Standard web API use and design (HTTP, JSON, XML, HTML, etc.).\n\n\n\n\nCultural Fit:\n\nWe’re a lean startup and very driven by our interaction with customers, as well as their happiness and satisfaction. Our philosophy is that you shouldn’t be afraid to throw away a week's worth of work if our customers aren’t interested in moving in that direction.\n\nWe hold the position that our customers are 1000x smarter than we are and we try to listen to them intently, and consistently.\n\n\n* Proficiency in English is a requirement. Since you will have colleagues in various countries with various primary language skills, we all need to use English as our common company language. You must also be able to work with email, draft proposals, etc. Internally we work as a large distributed Open Source project and use tools like email, Slack, Google Hangouts, and Skype.\n\n* Familiarity working with a remote team and ability (and desire) to work for a virtual company. Should have a home workstation, fast Internet access, etc.\n\n* Must be able to manage your own time and your own projects. 
Self-motivated employees will fit in well with the rest of the team.\n\n* It goes without saying, but being friendly and a team player is very important.\n\n\n\n\n\nCompensation:\n\n\n* Salary based on experience. We're willing to be competitive, and we're a great company to work for.\n\n* Ability to work remotely at home. Work-life balance is a must.\n\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar Java, Engineer, DevOps, English, Elasticsearch, API, Linux and Cassandra jobs:\n\n
$70,000 — $120,000/year\n