Restaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365's culture is focused on empowering team members to produce top-notch results while elevating their skills. We're constantly evolving and improving to make sure we are and always will be "Best in Class" ... and we want that for you too!

Restaurant365 is looking for an experienced Data Engineer to join our data warehouse team that enables the flow of information and analytics across the company. The Data Engineer will participate in the engineering of our enterprise data lake, data warehouse, and analytics solutions. This is a key role on a highly visible team that will partner across the organization with business and technical stakeholders to create the objects and data pipelines used for insights, analysis, executive reporting, and machine learning. You will have the exciting opportunity to shape and grow with a high-performing team and the modern data foundation that enables the data-driven culture fueling the company's growth.

How you'll add value:
* Participate in the overall architecture, engineering, and operations of a modern data warehouse and analytics platform.
* Design and develop the objects in the data lake and EDW that serve as core building blocks for the semantic layer and the datasets used for reporting and analytics across the enterprise.
* Develop data pipelines, transformations (ETL/ELT), orchestration, and job controls using repeatable software development processes, quality assurance, release management, and monitoring capabilities.
* Partner with internal business and technology stakeholders to understand their needs, then design, build, and monitor pipelines that meet the company's growing business needs.
* Look for opportunities for continuous improvement that automate workflows, reduce manual processes, reduce operational costs, uphold SLAs, and ensure scalability.
* Use an automated observability framework to ensure the reliability of data quality, data integrity, and master data management.
* Partner closely with peers in Product, Engineering, Enterprise Technology, and InfoSec teams on the shared enterprise needs of a data lake, data warehouse, semantic layer, transformation tools, BI tools, and machine learning.
* Partner closely with peers in Business Intelligence, Data Science, and SMEs in partner business units to translate analytics and business requirements into SQL and data structures.
* Responsible for ensuring platforms, products, and services are delivered with operational excellence and rigorous adherence to ITSM processes and InfoSec policies.
* Adopt and follow sound Agile practices for the delivery of data engineering and analytics solutions.
* Create documentation for reference, process, data products, and data infrastructure.
* Embrace ambiguity and other duties as assigned.

What you'll need to be successful in this role:
* 3-5 years of engineering experience in enterprise data warehousing, data engineering, business intelligence, and delivering analytics solutions
* 1-2 years of SaaS industry experience required
* Deep understanding of current technologies and design patterns for data warehousing, data pipelines, data modeling, analytics, visualization, and machine learning (e.g. the Kimball methodology)
* Solid understanding of modern distributed data architectures, data pipelines, and API pub/sub services
* Experience engineering for SLA-driven data operations with responsibility for uptime, delivery, consistency, scalability, and continuous improvement of data infrastructure
* Ability to understand and translate business requirements into data/analytics solutions
* Extensive experience with Agile development methodologies
* Prior experience with at least one of: Snowflake, BigQuery, Synapse, Databricks, or Redshift
* High proficiency in both SQL and Python for data manipulation and the assembly of Airflow DAGs (a minimal illustrative sketch appears near the end of this posting)
* Experience with cloud administration and DevOps best practices on AWS and/or GCP, or general cloud architecture best practices, with accountability for cloud cost management
* Strong interpersonal, leadership, and communication skills, with the ability to relate technical solutions to business terminology and goals
* Ability to work independently in a remote culture and across many time zones and outsourced partners, likely CT or ET

R365 Team Member Benefits & Compensation
* This position has a salary range of $94K-$130K. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices.
* Comprehensive medical benefits, 100% paid for employee
* 401k + matching
* Equity Option Grant
* Unlimited PTO + Company holidays
* Wellness initiatives

#BI-Remote

$90,000 - $130,000 a year

R365 is an Equal Opportunity Employer and we encourage all forward-thinkers who embrace change and possess a positive attitude to apply.
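The Airflow requirement above is the most concrete technical item in this posting; for context, here is a minimal, hypothetical sketch of the SQL-plus-Python style of Airflow DAG it refers to. This is not Restaurant365's pipeline: the DAG id, task names, schedule, and data are invented, and Airflow 2.4+ is assumed.

```python
# Hypothetical example only: a minimal Airflow DAG of the kind the requirements above
# describe. Table, task, and schedule names are illustrative, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder extract step; a real task would pull from a source system or API.
    return [{"order_id": 1, "total": 42.50}]


def load_to_warehouse(**context):
    # Placeholder load step; a real task would write to Snowflake/BigQuery/etc.
    rows = context["ti"].xcom_pull(task_ids="extract_orders")
    print(f"Loading {len(rows)} rows into the warehouse")


with DAG(
    dag_id="daily_orders_elt",        # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # assumes Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load
```

In a real warehouse pipeline the Python callables would likely give way to SQL transformations and provider operators for the platforms named above (Snowflake, BigQuery, and so on).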
#Salary and compensation
No salary data was published by the company, so the job board estimated a range based on similar roles:

$60,000 — $110,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
Remote
The PagerDuty Expert Services team is focused on enabling our customers to most effectively leverage our platform to achieve their business goals. We partner with our key customers to provide large-scale onboarding; build custom integrations and service modeling; and provision users, teams, services, schedules, and escalation policies.

As the company introduces a new approach to services delivery in a rapidly growing startup, this role will be instrumental in developing the process and technologies to deliver amazing customer experiences. You will help establish methodologies and repeatable processes to deliver successful implementations, every time.

About You

You've got technical chops. You are a technologist first. You demonstrate a deep knowledge of IT monitoring tools or of DevOps, SRE, or IT Operations. You run the implementation process from design to delivery. You partner with customers to help design and build integrations to provide awesome implementations.

You are a problem solver. You identify potential roadblocks and provide thoughtful solutions. You are excellent at multitasking, are self-driven, and can work both independently and with a cross-functional team. You come up to speed quickly, love to learn, have a strong working style and impeccable attention to detail. You are comfortable running multiple simultaneous customer engagements and able to manage multiple threads within those engagements.

You are an excellent and compelling communicator. You can break down complex technical concepts and explain them clearly to partners from business and technical backgrounds, from a DevOps engineer up to a C-level executive. You have experience implementing technology solutions in the SaaS world and can articulate the solution to all levels in the customer organization.

You are an extraordinary partner: to sales, to product, to your team, to your customers. Depending on the situation, you play the part of project manager, architect, consultant, technical guru, product expert, leader, evangelist, and teacher, with a relentless commitment to outstanding customer service.

Ideal Qualifications

* 5+ years of hands-on technical background with a primary emphasis on IT Operations / Professional Services delivery
* Demonstrated Python and JavaScript experience, especially within an AWS Lambda and stand-alone automation, scripting, and tooling context (a minimal illustrative sketch follows this list)
* Demonstrated knowledge of and ability to interact with common SaaS and traditional software APIs (REST, SOAP, WS), webhooks, etc. as part of scripting and tooling development, integration development, and ETL-like activities
* Knowledge of infrastructure as code and DevOps/SRE toolchains (GitHub, Terraform, Chef, Artifactory, JFrog, Nomad, Consul, Vault)
* Ability to do advanced scripting (Python, JavaScript, Go, Ruby, Perl) and fundamental knowledge of Linux
* Hands-on technical background using AWS (EC2, Lambda, S3, RDS, API Gateway, DynamoDB, IAM)
* Deep technical knowledge of ITSM tools like ServiceNow, Jira, Remedy (ServiceNow Admin, ServiceNow Scripting, ServiceNow GScript/Rhino, Studio)
* Understanding of monitoring systems (DataDog, Dynatrace, Nagios, New Relic, Splunk, Zabbix)
* You know and understand our space (or you're already a fan of our product!).
* Be prepared to give us a demo and show us what you've got!
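To make the Lambda-and-webhook qualification above concrete, here is a minimal, hypothetical Python sketch of an AWS Lambda handler behind API Gateway that accepts a webhook and forwards a summary to another REST endpoint. The endpoint URL and payload fields are invented; this is not PagerDuty's integration code.

```python
# Hypothetical sketch only: a small Lambda handler of the kind the qualifications above
# refer to. The downstream URL and the payload fields are illustrative.
import json
import urllib.request


def handler(event, context):
    """Entry point for an API Gateway (proxy integration) invocation."""
    payload = json.loads(event.get("body") or "{}")

    summary = {
        "source": payload.get("source", "unknown"),
        "event_type": payload.get("type", "unspecified"),
    }

    # Forward the summary to a downstream REST endpoint (illustrative URL).
    req = urllib.request.Request(
        "https://example.com/api/events",
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        downstream_status = resp.status

    return {
        "statusCode": 200,
        "body": json.dumps({"forwarded": True, "downstream_status": downstream_status}),
    }
```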
#Salary and compensation
No salary data was published by the company, so the job board estimated a range based on similar roles:

$70,000 — $117,500/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
Santiago, Santiago Metropolitan Region, Chile
This job post is closed and the position is probably filled. Please do not apply.
**Ethyca** (https://ethyca.com) is a high-growth Series A startup building the trust infrastructure of the internet. Ethyca's platform powers data privacy for businesses facing regulations like GDPR and CCPA all over the world. We do this by building world-class tools, developer-friendly APIs, and secure, deployed applications to make it easy for our customers to integrate all their systems together to provide their users with powerful rights over their personal data.

As Ethyca's first technical writer, you will be focused on the developer adopters onboarding onto our fleet of open-source privacy products. The success of open-source products relies on great documentation, and so your technical expertise will act as the welcome mat for everyone who lands on our doorstep. Your writing style speaks to stakeholders in all enterprise verticals: engineers of every variety, lawyers, product managers, and business operations resources. Your superpower is translating complex technical concepts and value propositions into a clearly organized, concisely written, durable body of knowledge for any audience to understand. In this role, you will lay the foundations for a strong, long-term relationship between Ethyca and our growing developer community.

## What you will do
* Document deployment and product guides for Ethyca's open-source privacy tools, including Dockerized deployed agents, API services, and more
* Document standards, guidelines, and best practices for engineers and peripheral business units
* Organize documentation and make it easily discoverable
* Work closely with Ethyca's product team to stay current on product releases and roadmap features
* Work with Ethyca's marketing team to write and consult on technical-leaning marketing content
* Maintain a deep understanding of Ethyca's open-source products, the roadmap, and the growing privacy landscape

## Requirements
* A self-starting **software engineer at heart**. With 3-5 years of engineering or technical writing experience working with SaaS software products, you know just enough SQL and NoSQL, have implemented API and git documentation best practices, have gotten tangled in containerized deployments once or twice, and are familiar with cloud service management products like AWS, GCP, and Azure. And we love markdown, and hope you do too!
* **Empathetic to developers** and their needs. You've written developer-facing documentation for 1-3 years and are able to quickly adapt to community and internal feedback. Software engineers don't yet know that they should expect a lot from a data privacy product baked into their SDLC process. We are here to educate them and provide a best-in-class experience using our developer-focused open-source tools.
* An **independent yet collaborative contributor**: your experience managing documentation projects independently in a complex, highly technical space means you've worked with net-sec-ops, DevOps, data/database engineering, marketing, and product engineering teams, and you know how and when to engage each.
* A relentless **troubleshooter**. No problem too tough, no issue too elusive; we are looking for someone who will tackle the sometimes nebulous documentation from engineers and work through it, rather than around it.
* An **egoless, yet expert** liaison between Ethyca's open-source engineering teams and the developer community. Great writing is everyone's responsibility at Ethyca, but sometimes we need a little help (okay, a lot of help!) from experts like you! You are an exceptional writer, nay, wordsmith, who understands that effective, concise writing leads to better comprehension of complex systems.

## Benefits
* Competitive cash and equity compensation
* 100% medical and dental insurance coverage for you and your dependent(s)
* Remote-friendly office hours and vacation policy
* Sponsored company lunches and events
* Parental leave and 401K plan

We are an equal opportunity employer and are committed to diversity, equity, and inclusion. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or any other protected characteristic.

Ethyca is a distributed team with headquarters in NYC and remote workers across the US. When it's safe to meet again, you'll have the opportunity to travel to NYC a few times a year for company events. We are currently unable to sponsor visas, so we require that you are authorized to work in the USA.

We're a data privacy company building a missing piece of the Internet's infrastructure: the trust layer that empowers users and businesses to manage data respectfully. Every day, we're solving challenges for customers and thinking about the future of human rights as society increasingly moves online. If this sounds intriguing and you're excited to shape that future with us, we'd love to talk to you!

#Salary and compensation
$80,000 — $100,000/year

#Benefits
Dental insurance, Distributed team

#Location
United States
This job post is closed and the position is probably filled. Please do not apply.
Basil Systems is a healthcare-focused start-up delivering a SaaS application that leverages product, market, and regulatory data to support all types of players in the medical device and pharmaceutical industries. Our goal is to help our customers deliver safe and innovative healthcare products to patients and consumers, while reducing the time to get to market. After 14 months of stealth development, we soft-launched in late December 2020 to a very positive market reception and customers.

In short, Basil is a funded, revenue-generating, and growth-oriented company, and we are actively seeking talented engineers to build and deliver an extensive roadmap of amazing features. We need a Sr. Backend Engineer excited by multifaceted data challenges, with the ability to design, build, and deploy stable and scalable production software.

Sr. Backend Engineer

We are looking for a self-starter; you'll spend your time building reliable backend services and solving complex data processing problems. We would like you to have:

* 5+ years of professional experience as a backend or data engineer
* Very strong experience with modern Python (3.8+), MongoDB (particularly 4.2+), and MySQL or MariaDB
* Significant experience with AWS services, operations, and architecture, especially with respect to data-heavy applications
* Time spent building and managing ETL pipelines (a minimal illustrative sketch appears near the end of this posting)
* Familiarity with recent versions of Elasticsearch
* Strong DevOps experience, with a commitment to engineering best practices

Not all of these are required, but ideally you have experience with:

* TypeScript / ES (Node)
* Golang
* Docker & Kubernetes
* Terraform
* A solid understanding of modern security practices
* CI/CD

Some nice-to-haves:

* You have worked with, reconciled, and normalized disparate data sets
* Natural language processing (NLP) experience
* Strong data modeling skills, and an eye for what the data means in a business and product context
* Interest in machine learning, and experience building ML-driven or algorithmic data products
* Some exposure to product analytics data pipelines and a basic understanding of A/B testing

Finally

We are a distributed team headquartered in Boston, with an office in Nashville, TN. However, our culture allows flexibility as to when, where, and how you work best, and we actively employ and support remote engineers.

Benefits include:

* Competitive salary
* Health and vision coverage
* An attractive equity package

Basil supports and encourages building a work environment that is diverse, inclusive, and safe for everyone, and we welcome all applicants.
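As a rough illustration of the MongoDB / Elasticsearch / ETL experience described above, here is a minimal, hypothetical extract-transform-load sketch in Python. The connection strings, database, collection, and index names, and the field mapping are all invented, and it assumes the pymongo and elasticsearch-py (8.x) client libraries; it is not Basil's actual pipeline.

```python
# Hypothetical sketch only: a tiny ETL step copying documents from MongoDB into
# Elasticsearch. All names and the mapping are illustrative.
from elasticsearch import Elasticsearch, helpers  # assumes elasticsearch-py 8.x
from pymongo import MongoClient


def extract(mongo_uri: str):
    """Yield raw product documents from MongoDB."""
    client = MongoClient(mongo_uri)
    yield from client["demo_db"]["products"].find({}, {"_id": 1, "name": 1, "status": 1})


def transform(doc: dict) -> dict:
    """Normalize a raw document into the shape the search index expects."""
    return {
        "_index": "products-demo",
        "_id": str(doc["_id"]),
        "_source": {
            "name": (doc.get("name") or "").strip(),
            "status": doc.get("status", "unknown"),
        },
    }


def load(es_url: str, actions) -> None:
    """Bulk-index the transformed documents into Elasticsearch."""
    es = Elasticsearch(es_url)
    helpers.bulk(es, actions)


if __name__ == "__main__":
    docs = (transform(d) for d in extract("mongodb://localhost:27017"))
    load("http://localhost:9200", docs)
```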
#Salary and compensation
No salary data was published by the company, so the job board estimated a range based on similar roles:

$75,000 — $120,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
This job post is closed and the position is probably filled. Please do not apply.
Crunch.io, part of YouGov PLC, is a market-defining company in the analytics SaaS marketplace. We're a company on the rise. We've built a revolutionary platform that transforms our customers' ability to drive insight from market research and survey data. We offer a complete survey data analysis platform that allows market researchers, analysts, and marketers to collaborate in a secure, cloud-based environment, using a simple, intuitive drag-and-drop interface to prepare, analyze, visualize, and deliver survey data and analysis. Quite simply, Crunch provides the quickest and easiest way for anyone, from CMO to PhD, with zero training, to analyze survey data. Users create tables, charts, graphs, and maps. They filter and slice-and-dice survey data directly in their browser.

Our start-up culture is casual, respectful of each other's varied backgrounds and lives, and high-energy because of our shared dedication to our product and our mission. We are loyal to each other and our company. We value work/life balance, efficiency, simplicity, and fantastic customer service! Crunch has no offices and fully embraces a 100% remote culture. We have 40 employees spread across 5 continents. Remote work at Crunch is flexible and largely independent, yet highly cooperative.

We are hiring a DevOps Lead to help expand our platform and operations excellence. We are inviting you to join our small, fully remote team of developers and operators helping make our platform faster, more secure, and more reliable. You will be self-motivated and disciplined in order to work with our fully distributed team.

We are looking for someone who is a quick study, who is eager to learn and grow with us, and who has experience in DevOps and Agile cultures. At Crunch, we believe in learning together: we recognize that we don't have all the answers, and we try to ask each other the right questions. As Crunch employees are completely distributed, it's crucial that you can work well independently and keep yourself motivated and focused.

Our Stack:

We currently run our in-house production Python code against Redis, MongoDB, and Elasticsearch services. We proxy API requests through NGINX, load balance with ELBs, and deploy our React web application to the AWS CloudFront CDN. Our current CI/CD process is built around GitHub, Jenkins, and Blue Ocean, including unit, integration, and end-to-end tests and automated system deployments. We deploy to Auto Scaling groups using Ansible and cloud-init.

In the future, all or part of our platform may be deployed via DroneCI, Kubernetes, NGINX ingress, Helm, and Spinnaker.

What you'll do:

As a Leader:

* Manage and lead a team of Cloud Operations Engineers who are tasked with ensuring our uptime guarantees to our customer base.
* Scale the worldwide Cloud Operations Engineering team with the strategic implementation of new processes and tools.
* Hire and ramp exceptional Cloud Operations Engineers.
* Assist in scoping, designing, and deploying systems that reduce Mean Time to Resolve for customer incidents.
* Inform executive leadership and escalation management personnel of major outages.
* Compile and report KPIs across the full company.
* Work with Sales Engineers to complete pre-sales questionnaires and to gather customer use metrics.
* Prioritize projects competing for human and computational resources to achieve organizational goals.

As an Engineer:

* Monitor and detect emerging customer-facing incidents on the Crunch platform; assist in their proactive resolution, and work to prevent them from occurring (a minimal illustrative sketch follows the qualifications below).
* Coordinate and participate in a weekly on-call rotation, where you will handle short-term customer incidents (from direct surveillance or through alerts via our Technical Services Engineers).
* Diagnose live incidents, differentiate between platform issues and usage issues across the entire stack (hardware, software, application, and network) within physical datacenter and cloud-based environments, and take the first steps towards resolution.
* Automate routine monitoring and troubleshooting tasks.
* Cooperate with our product management and engineering organizations by identifying areas for improvement in the management of the applications powering the Crunch infrastructure.
* Provide consistent, high-quality feedback and recommendations to our product managers and development teams regarding product defects or recurring performance issues.
* Be the owner of our platform. This includes everything from our cloud provider implementation to how we build, deploy, and instrument our systems.
* Drive improvements and advancements to the platform in areas such as container orchestration, service mesh, and request/retry strategies.
* Build frameworks and tools to empower safe, developer-led changes, automate the manual steps, and provide insight into our complex system.
* Work directly with software engineering and infrastructure leadership to enhance the performance, scalability, and observability of multiple applications, ensure that production hand-off requirements are met, and escalate issues.
* Embed into SRE projects to stay close to the operational workflows and issues.
* Evangelize the adoption of best practices in relation to performance and reliability across the organization.
* Provide a solid operational foundation for building and maintaining successful SRE teams and processes.
* Maintain project and operational workload statistics.
* Promote a healthy and functional work environment.
* Work with security experts to do periodic penetration testing, and drive resolution for any issues discovered.
* Liaise with IT and Security Team Leads to successfully complete cross-team projects, filling in for these Leads when necessary.
* Administer a large portfolio of SaaS tools used throughout the company.

Qualifications:

* Team lead experience of an on-call DevOps, SRE, or Cloud Operations team (at least 2 years).
* Experience recruiting, mentoring, and promoting high-performing team members.
* Experience being an on-call DevOps, SRE, or Cloud Operations engineer (at least 2 years).
* Proven track record of designing, building, sizing, optimizing, and maintaining cloud infrastructure.
* Proven experience developing software, CI/CD pipelines, automation, and managing production infrastructure in AWS.
* Proven track record of designing, implementing, and maintaining full CI/CD pipelines in a cloud environment (Jenkins experience preferred).
* Experience with containers and container orchestration tools (Docker, Kubernetes, Helm, Traefik, NGINX ingress, and Spinnaker experience preferred).
* Expertise with Linux system administration (5 yrs) and networking technologies, including IPv6.
* Knowledgeable about a wide range of web and internet technologies.
* Knowledge of NoSQL database operations and concepts.
* Experience in monitoring, system performance data collection and analysis, and reporting.
* Capability to write small programs/scripts to solve both short-term systems problems and to automate repetitive workflows (Python and Bash preferred).
* Exceptional English communication and troubleshooting skills.
* A keen interest in learning new things.
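As a small illustration of the "automate routine monitoring" responsibility and the Auto Scaling group deployment described in the stack section, here is a hypothetical Python sketch that reports instance health for one Auto Scaling group via boto3. The group name and region are invented; this is not Crunch's tooling.

```python
# Hypothetical sketch only: list instance health in an Auto Scaling group with boto3.
# The group name and region are illustrative.
import boto3


def asg_instance_health(group_name: str, region: str = "us-east-1") -> dict:
    """Return a mapping of instance id -> (lifecycle state, health status) for one ASG."""
    autoscaling = boto3.client("autoscaling", region_name=region)
    response = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[group_name])
    groups = response["AutoScalingGroups"]
    if not groups:
        raise ValueError(f"No Auto Scaling group named {group_name!r}")

    return {
        instance["InstanceId"]: (instance["LifecycleState"], instance["HealthStatus"])
        for instance in groups[0]["Instances"]
    }


if __name__ == "__main__":
    for instance_id, (state, health) in asg_instance_health("demo-web-asg").items():
        print(f"{instance_id}: state={state} health={health}")
```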
#Salary and compensation
No salary data was published by the company, so the job board estimated a range based on similar roles:

$70,000 — $120,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
This job post is closed and the position is probably filled. Please do not apply.
Opportunity

SecurityScorecard is hiring a DevOps Engineer to bridge the gap between our global development and operational teams, someone who is motivated to help continue automating and scaling our infrastructure. The DevOps Engineer will be responsible for setting up and managing the operation of project development and test environments, as well as the software configuration management processes for the entire application development lifecycle. Your role would be to ensure the optimal availability, latency, scalability, and performance of our product platforms. You would also be responsible for automating production operations, promptly notifying backend engineers of platform issues, and checking long-term quality metrics.

Our infrastructure is based on AWS with a mix of managed services like RDS, ElastiCache, and SQS, as well as hundreds of EC2 instances managed with Ansible and Terraform. We are actively using three AWS regions, and have equipment in several data centers across the world.

Regions: North America, (GMT-7:00) Mountain time to (GMT-4:00) Atlantic time

Responsibilities

* Training, mentoring, and lending expertise to coworkers with regard to operational and security best practices.
* Reviewing and providing feedback on GitHub pull requests from team members and development teams; a significant percentage of our Software Engineers have written Terraform.
* Identifying opportunities for technical and process improvement and owning the implementation.
* Championing the concepts of immutable containers, Infrastructure as Code, stateless applications, and software observability throughout the organization.
* Systems performance tuning with a focus on high availability and scalability.
* Building tools to ease the usability and automation of processes.
* Keeping products up and operating at full capacity.
* Assisting with migration processes as well as backup and replication mechanisms.
* Working on a large-scale distributed environment with a focus on scalability, reliability, and performance.
* Ensuring proper monitoring and alerting are configured.
* Investigating incidents and performance lapses.

Come help us with projects such as…

* Extending our compute clusters to support low-latency, on-demand job execution
* Turning pets into cattle
* Cross-region replication of systems and corresponding data to support low-latency access
* Rolling out application performance monitoring to existing services, extending integrations where required
* Migration from self-hosted ELK to a SaaS stack
* Continuous improvement of CI/CD processes, making builds & deployments faster, safer, and more consistent
* Extending a global VPN WAN to a datacenter with IPsec+BGP

Requirements

* 3+ years of DevOps and/or Operations experience in a Linux-based environment
* 1+ years of production environment experience with Amazon Web Services (AWS)
* 1+ years using SQL databases (MySQL, Oracle, Postgres)
* Strong scripting abilities (Bash/Python)
* Strong experience with CI/CD processes (Jenkins, Ansible) and automated configuration tools (Puppet/Chef/Ansible)
* Experience with container orchestration (AWS ECS, Kubernetes, Marathon/Mesos)
* Ability to work as part of a highly collaborative team
* Understanding of monitoring tools like DataDog
* Strong written and verbal communication skills

Nice to Have

* You knew exactly what was meant by "Turning pets into cattle"
* Experience working with Kubernetes on bare metal and/or the AWS Elastic Kubernetes Service
* Experience with RabbitMQ, MongoDB, or Apache Kafka
* Experience with Presto or Apache Spark
* Familiarity with computation orchestration tools such as HTCondor, Apache Airflow, or Argo
* Understanding of network concepts: OSI layers, firewalls, DNS, split-horizon DNS, VPN, routing, BGP, etc.
* A deep understanding of AWS IAM and how it interacts with S3 buckets
* Experience with SAFe
* Strong programming skills in 2+ languages

Tooling We Use

* Data definition, format, and interfaces (a minimal illustrative sketch follows this list)
  * Definitions: Protobuf V3
  * Normalize from: JSON / XML / CSV
  * Normalize to: Protobuf / ORC
  * Interfaces: REST API(s) and object store buckets
* Cloud services: Amazon Web Services
* Databases: PostgreSQL, PrestoDB
* Cache: Redis, Varnish
* Languages: Python / C++14 / Scala / Golang / JavaScript / Ruby / Java
* Job orchestration: HTCondor / Apache Airflow / Rundeck
* Analytics: Spark
* Storage: NFS/EFS, AWS S3, HDFS
* Computation: Docker containers / VMs / Metal / EMR
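To ground the "Normalize from JSON / XML / CSV" and "Normalize to Protobuf / ORC" items above, here is a hypothetical Python sketch of a CSV-to-ORC normalization step. The posting does not name a library; pyarrow (7.0+ assumed) and the file and column handling shown here are purely illustrative.

```python
# Hypothetical sketch only: one way to normalize a CSV input into ORC.
# Library choice, file names, and the normalization rule are illustrative.
import pyarrow as pa
import pyarrow.csv as pacsv
import pyarrow.orc as orc


def csv_to_orc(csv_path: str, orc_path: str) -> pa.Table:
    """Read a CSV file, apply a trivial normalization, and write the result as ORC."""
    table = pacsv.read_csv(csv_path)

    # Example normalization: lower-case and strip whitespace from column names.
    normalized_names = [name.strip().lower() for name in table.column_names]
    table = table.rename_columns(normalized_names)

    orc.write_table(table, orc_path)
    return table


if __name__ == "__main__":
    result = csv_to_orc("findings.csv", "findings.orc")  # illustrative file names
    print(f"Wrote {result.num_rows} rows and {result.num_columns} columns to ORC")
```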
#Salary and compensation
No salary data was published by the company, so the job board estimated a range based on similar roles:

$70,000 — $120,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
This job post is closed and the position is probably filled. Please do not apply.
Summary:

We are looking for a Senior DevOps (Site Reliability) Engineer to join Selerity's team, scaling up an A.I.-driven analytics and recommendation platform and integrating it into enterprise workflows. Highly competitive compensation plus significant opportunities for professional growth and career advancement.

Employment Type: Contract or full-time

Location is flexible: We have offices in New York City and Oak Park, Illinois (a Chicago suburb), but about half of our team currently works remotely from various parts of Europe, North America, and Asia.

Job Description:

Want to change how the world engages with chat, research, social media, news, and data?

Selerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We're tackling big challenges in predictive analytics, conversational interfaces, and workflow automation and need your help!

We're looking for an experienced DevOps (Site Reliability) Engineer to join a major initiative at a critical point in our company's growth. The majority of Selerity's applications are developed in Java and C++ on Linux, but knowledge of other languages (especially Python and JavaScript), platforms, and levels of the stack is very helpful.

Must-haves:

* Possess a rock-solid background in Computer Science (minimum BS in Comp Sci or related field) + at least 5 years (ideally 10+) of challenging work experience.
* Implementation of DevOps / SRE processes at scale, including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite); a minimal illustrative sketch follows this list.
* Demonstrated proficiency building and modifying Java and C++ applications in Linux environments (using Git, SVN).
* Significant operations expertise with the Ansible (preferred), Chef, or Puppet deployment automation system in a cloud environment.
* Direct experience in the design, implementation, and maintenance of SaaS APIs that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).
* Solid track record of making effective design decisions balancing near-term and long-term objectives.
* Know when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.
* Work effectively in agile teams with remote members; get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.
* Clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc.
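As a small illustration of the platform-monitoring must-have above (Graphite in particular), here is a hypothetical Python sketch that pushes one metric sample to a Graphite/carbon listener over the plaintext protocol. The host, default port, and metric name are illustrative; this is not Selerity's monitoring code.

```python
# Hypothetical sketch only: send a metric to Graphite over its plaintext protocol
# ("<path> <value> <timestamp>\n" on TCP port 2003). Host and metric names are invented.
import socket
import time


def send_graphite_metric(path: str, value: float, host: str = "localhost", port: int = 2003) -> None:
    """Send one metric sample to a Graphite/carbon plaintext listener."""
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))


if __name__ == "__main__":
    # Illustrative metric name; a real deployment would namespace per host/service.
    send_graphite_metric("demo.api.requests_per_second", 123.0)
```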
\n\n\n\nNice-to-haves:\n\n * Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as ISO layer 1 and 2 protocols.\n\n * Experience with Internet routing protocols such as BGP.\n\n * Implementation of software defined networking or other non-traditional networking paradigms.\n\n * Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.\n\n * Full-stack development and operations experience with web apps on Node.js.\n\n * Experience with analytics visualization libraries.\n\n * Experience with large-scale analytics and machine learning technologies including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc.\n\n * Conversant with relational, column, object, and graph database fundamentals and strong practical experience in any of those paradigms.\n\n * Deep understanding of how to build software agents and conversational workflows.\n\n * Experience with additional modern programming languages (Python, Scala, …)\n\n\n\nOur stack:\n\n * Java, C++, Python, JavaScript/ECMAscript + Node, Angular, RequireJS, Electron, Scala, etc.\n\n * A variety of open source and in-house frameworks for natural language processing and machine learning including artificial neural networks / deep learning.\n\n * Hybrid of AWS (EC2, S3, RDS, R53) + dedicated datacenter network, server and GPU/coprocessor infrastructure.\n\n * Cassandra, Aurora plus in-house streaming analytics pipeline (similar to Apache Flink) and indexing/query engine (similar to ElasticSearch).\n\n * In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (similarities to protobufs/FixFast/zeromq/itch).\n\n * Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, Intellij.\n\nThis position offers a great opportunity to work with advanced technologies, collaborate with a top-notch, global team, and disrupt a highly visible, multi-billion-dollar market. \n\n\n\nCompensation:\n\nWe understand how to attract and retain the best talent and offer a competitive mix of salary, benefits and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.\n\nWe have team members on four continents and we're adept at making remote workers feel like part of the team. If you join our NYC main office be sure to bring your Nerf toys, your drones and your maker gear - we’re into that stuff, too.\n\n\nInterview Process:\n\nIf you can see yourself at Selerity, send your resume and/or online profile (e.g. LinkedIn) to [email protected]. We’ll arrange a short introductory phone call and if it sounds like there’s a match we'll arrange for you to meet the team for a full interview. \n\nThe interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. It is intended to be challenging - but the developers you meet and the topics you’ll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity. \n\nWe value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence. 
You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity or national origin.\n\n\n\nRecruiters: Please note that we are not currently accepting referrals from recruiters for this position. \n\n#Salary and compensation\n
No salary data was published by the company, so the job board estimated a range based on similar roles:

$70,000 — $120,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)