AirDNA began with a big dream in a balmy California garage in 2015. Since then, the technology startup has grown into the leading provider of data and business intelligence for the billion-dollar travel and vacation rental industry, with offices in Denver and Barcelona.

Our self-serve platform eliminates guesswork and equips Airbnb hosts with the smart, competitive insights they need to succeed in the ever-evolving short-term rental landscape.

We also arm enterprise clients with customized reports and in-depth dashboards so they can scale and invest strategically. These customers include hundreds of top financial institutions, real estate companies, vacation rental managers, and destination marketing organizations around the world.

We track the daily performance of over 10 million Airbnb and Vrbo properties across 120,000 global markets. We also collect data from over a million partner properties. This marriage of scraped and source data, enhanced by our proprietary algorithms, makes our solutions the most accurate and comprehensive in the world.

We're firm believers that data isn't the destination; it's the starting point. The launchpad. The bedrock for any future-forward business.

The Role:

Come join our Platform team and help drive our growth by designing, maintaining, and improving our platform and processes. The ideal person for this role is driven to design robust, scalable, secure infrastructure, cares about the details, and enjoys helping both individual engineers and their teams work more effectively.

Here's what you'll get to do:
* Build and maintain monitoring, logging, and telemetry solutions for proactive performance and reliability management, using tools such as Datadog, Prometheus, and Grafana (see the sketch after the lists below).
* Evaluate and integrate new technologies to enhance platform capabilities, especially around containers, databases, and cloud-native architectures.
* Ensure security, compliance, and cost optimization in all platform solutions, utilizing tools like IAM, GuardDuty, and AWS Security Hub.
* Design, implement, and manage scalable infrastructure solutions using AWS services (EC2, S3, RDS, Lambda, CloudFront, etc.).
* Manage, scale, and optimize multiple databases (PostgreSQL + Druid) to ensure performance, availability, and redundancy.
* Collaborate with development and operations teams to streamline release processes and integrate best practices for infrastructure as code (Terraform, CloudFormation).
* Work closely with stakeholders to identify infrastructure needs and lead initiatives to scale the platform in alignment with business goals.
* Drive continuous improvement in the platform's architecture and processes, optimizing for performance, reliability, and operational efficiency.
* Collaborate with cross-functional teams to align platform development with product goals and strategies.
* Mentor and guide junior team members, providing technical leadership and driving best practices across the platform team.

Here's what you'll need to be successful:
* Strong familiarity with Amazon Web Services; multi-account experience preferred.
* Expertise using Docker and Kubernetes.
* Experience developing and maintaining CI/CD pipelines to automate application deployment and infrastructure provisioning.
* Ability to diagnose and troubleshoot problems in a distributed microservice environment.
* Solid understanding of TCP/IP networking.
* Expertise with Linux (Ubuntu, Alpine, and/or Amazon Linux preferred).
* Understanding of DevOps practices.
* Demonstrated experience managing or leading platform teams, with the ability to grow the team and develop talent within it.

Here's what would be nice to have:
* GitLab pipelines
* ArgoCD
* Linkerd, Istio, or another service mesh
* ELK stack or similar logging platforms
* Ansible or other configuration management tools
* CloudFormation or other IaC tools
* JSON/YAML
* OpenVPN
* Apache Airflow
* Databases (PostgreSQL and Druid preferred)
* Cloudflare
* Atlassian tools such as Jira, Confluence, StatusPage
* Programming experience: shell scripting, Python, Golang preferred
* Experience with performance optimization of distributed microservices
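For a concrete flavor of the monitoring work above, here is a minimal, hypothetical sketch (not AirDNA's actual code) of exposing a custom metric from a Go service with Prometheus's standard client_golang library; the metric name, route, and port are invented for illustration:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is a hypothetical counter; a real service would increment
// it from its request handlers with an appropriate status label.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "app_requests_total",
		Help: "Total number of requests handled, by status.",
	},
	[]string{"status"},
)

func main() {
	prometheus.MustRegister(requestsTotal)

	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues("ok").Inc()
		w.Write([]byte("done"))
	})

	// Prometheus scrapes this endpoint; Grafana then charts the series,
	// e.g. with a PromQL query like rate(app_requests_total[5m]).
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```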
Here's what you can expect from us:
* Competitive cash compensation and benefits; the salary range for this position is $150,000 - $180,000 per year.

Colorado Salary Statement: The salary range displayed is specifically for potential hires who will work or reside in the state of Colorado if selected for this role. Any offered salary is determined based on internal equity, internal salary ranges, market data/ranges, the applicant's skills and prior relevant experience, and certain degrees and certifications.

Benefits include:
* Medical, dental, and vision packages to meet your needs
* Unlimited vacation policy; take time when you need it
* Eligibility for the company's annual discretionary bonus program
* 401K with employer match up to 4%
* Continuing education stipend
* 16 weeks of paid parental leave
* New MacBooks for employees

Office Perks for Denver-Based Employees:
* Commuter/RTD benefit
* Quarterly team outings
* In-office lunch Tuesday - Thursday
* A great office located just a few blocks from Union Station in the heart of Denver's historic LoDo neighborhood: high ceilings, exposed brick, a fully stocked kitchen (snacks, espresso, etc.), and plenty of meeting rooms and brainstorming nooks
* Pet-friendly!

AirDNA seeks to attract the best-qualified candidates who support the mission, vision, and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status, or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions, and privileges of employment including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance, and retirement.

We are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace, or the company as a whole. To request accommodation, please email [email protected].
Please allow 24 hours for us to process your request.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Amazon, Docker, DevOps, Education, Senior, Marketing, Golang, and Engineer roles:

$50,000 — $100,000/year
#Location

Remote
At pganalyze, we redefine the user experience for optimizing the performance of Postgres databases. Our product helps customers such as Atlassian, Robinhood, and DoorDash understand complex Postgres problems and performance issues.

Application developers use pganalyze to get deep insights into complex database behaviors. Our product is heavy on automated analysis and custom visualizations, and makes automatic recommendations, such as suggesting the best index to create for a slow query.

You will enjoy working at pganalyze if you are a software craftsperson at heart who cares about writing tools for developers. You will take new features from idea to production deployment end-to-end within days. Your work will regularly involve writing or contributing to open-source components as well as the Postgres project.

We are a fully remote company, with the core team based in the San Francisco Bay Area. Our company is bootstrapped and profitable. We emphasize autonomy and focus time by having few meetings per week.

### About the role

Your core responsibility: to develop and optimize our Postgres statistics and analysis pipeline, end-to-end, and to work on the processes that generate automated insights from this complex data set. This work involves having a detailed understanding of the core data points collected from the source Postgres database as a time series, and optimizing how they get retrieved, transported to the pganalyze servers, and then processed and analyzed.

Today, this data pipeline is a combination of open-source Go code (in the [pganalyze collector](https://github.com/pganalyze/collector)) and statistics processing written in Ruby. You will be responsible for improving this pipeline and introducing new technologies, including a potential rewrite of the statistics processing in Rust.

Some of the work will lead into the depths of Postgres code, and you might need to compile some C code, or understand how the pganalyze parser library, [pg_query](https://pganalyze.com/blog/pg-query-2-0-postgres-query-parser), works in detail (a short usage sketch follows the lists below).

Your work is the foundation of the next generation of pganalyze, with a focus on the automatic insights we can derive from the workload of the monitored Postgres databases, and on giving fine-tuned recommendations such as which indexes to create or which config settings to tune.

#### At pganalyze, you will:

* Collaborate with other engineers on shipping new functionality end-to-end, and ensure features are performant and well implemented
* Be the core engineer for the foundational components of pganalyze, such as the statistics pipeline that processes all data coming into the product
* Develop new functionality that monitors additional Postgres statistics, or derives new insights from the existing time series information
* Write Ruby, Go, or Rust code on the pganalyze backend and the pganalyze collector
* Evaluate and introduce new technologies, such as whether we should utilize Rust in more places in the product
* Optimize the performance of pganalyze components, using language-specific profilers or Linux tools like "perf"
* Scale out our backend, which relies heavily on Postgres itself for statistics storage
* Contribute to our existing open-source projects, such as pg_query, or create new open-source projects in the Postgres space
* Work with upstream communities, such as the Postgres project, and contribute code back

#### Previously, you have:

* Worked professionally for at least 5 years as a software engineer
* Written complex, data-heavy backend code in Rust, Go, Ruby, or Python
* Used Postgres for multiple projects, are comfortable writing SQL, and are familiar with EXPLAIN
* Created indexes on a Postgres database based on a query being slow
* Looked at the source of a complex open-source project to chase a hard-to-understand bug
* Written code that fetches data and/or interacts with cloud provider APIs
* Structured your work and set your schedule to optimize for your own productivity

#### Optionally, you may also have:

* Written low-level C code, for fun
* Used Protocol Buffers, FlatBuffers, msgpack, or Cap'n Proto to build your own APIs
* Analyzed patterns in time series data and run statistical analysis on the data
* Experimented with ML frameworks to analyze complex data sets
* Optimized a data-heavy application built on Postgres
* Written your own Postgres extensions
* Used APM and tracing tools to understand slow requests end-to-end

#### You could also be familiar with:

* Building your own Linux system from scratch
* The many [regards](https://twitter.com/regardstomlane) of Tom Lane on the Postgres mailing list
* Reproducible builds, and why it would be really nice to have them, like yesterday
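As a small, hedged usage sketch of the parser library mentioned above: pg_query's Go bindings (the pganalyze-maintained pg_query_go module) can normalize a query and compute its fingerprint, the kind of primitives that query statistics grouping builds on. The module version path below is an assumption:

```go
package main

import (
	"fmt"

	pg_query "github.com/pganalyze/pg_query_go/v4"
)

func main() {
	sql := "SELECT * FROM orders WHERE customer_id = 42"

	// Normalize replaces constants with $n placeholders, so queries that
	// differ only in their literal values group together.
	normalized, err := pg_query.Normalize(sql)
	if err != nil {
		panic(err)
	}
	fmt.Println(normalized) // SELECT * FROM orders WHERE customer_id = $1

	// Fingerprint produces a stable hash for grouping equivalent queries.
	fp, err := pg_query.Fingerprint(sql)
	if err != nil {
		panic(err)
	}
	fmt.Println(fp)
}
```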
#Salary and compensation

$140,000 — $180,000/year
#Location

United States / Canada
**About the Opportunity**

Laskie is partnered with a Series-C stage international digital freight platform, which is hiring two Latin America-based Backend Go Engineers: one Mid-level and one Senior. In each role, this person will have a passion for disrupting the industry and will play a key role in building a one-stop shop for carriers across the United States. You will be supporting a rapidly growing tech company with award-winning solutions focused on moving more with less.

You will create, develop, and maintain services written mainly in Go to support products for carriers. The day-to-day consists of implementing integrations, developing new features, and planning the evolution of existing features to enable the product to grow sustainably.

**The Right Fit**

If you are passionate about solving problems at scale, you will be successful here. You are fearless in pursuit of reinventing the future of freight by solving inefficiencies in the industry.

**Responsibilities**
* Plan, design, and implement software written in Go
* Partner with Product to guide the specification of new features and software
* Assist with supporting other teams as needed

**Skills & Qualifications**
* 3 years of experience writing software in Go for the Senior level (2 years for Mid-level)
* 6 years of experience writing software for web applications for the Senior level (3 years for Mid-level)
* Proven ability to lead the implementation of new features and software
* Proven ability to create system architecture that is fault-tolerant and scalable
* Knowledge of message brokers, asynchronous code execution, concurrency, and parallelism (see the sketch after these lists)
* Skill with version control software (such as Git)
* Experience with Linux
* Experience with the AWS ecosystem (RDS, Kinesis, API Gateway)
* Experience with GCP is a plus

**What you will find here**
* Generous stock options
* Opportunity to work with modern, cutting-edge technologies
* Mind and body initiatives (workout platform, yoga classes, activity challenges)
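As an illustration of the concurrency and parallelism knowledge called out above (an invented example, not the company's code), here is a minimal Go worker pool: a fixed number of goroutines consume jobs from a channel in parallel and report results on another channel:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)
	results := make(chan int)
	var wg sync.WaitGroup

	// Start a fixed pool of workers consuming from the jobs channel.
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j // stand-in for real work
			}
		}()
	}

	// Close results once every worker has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Produce jobs, then close the channel so workers can exit.
	go func() {
		for j := 1; j <= 10; j++ {
			jobs <- j
		}
		close(jobs)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```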
#Salary and compensation

$30,000 — $90,000/year
#Benefits

* Async
#Location

Latin America
Site Reliability Engineer (AWS)

Remote, located in the UTC +/- 2 region

At Snowplow, we are on a mission to empower people to differentiate with data. We provide the technology to enable our customers to take control of their data and empower them to do amazing things with it.

There are tens of thousands of pipelines using our open-source pipeline worldwide, collecting data emitted from over half a million sites. Running on AWS and GCP data technologies, it is ideal for data teams who want to manage their data in real time and in their own cloud. We also collect, validate, enrich, and load in the region of 5 billion events for our customers each day, and help them on their Snowplow journey through our management console.

To support our ongoing growth, we are now looking for an experienced Site Reliability Engineer (SRE) to join our Tech Ops team. You'll be taking the lead on all things AWS, including development and improvement of the current stack and rolling out new features, all whilst keeping these environments running smoothly. We would love to hear from you if the idea of programmatically controlling thousands of remote production environments excites you! (A small sketch of what that can look like follows the lists below.)

The Opportunity:

Our Private SaaS offering has grown significantly over the past year, and we now orchestrate and monitor Snowplow event pipelines across hundreds of customer-owned AWS & GCP sub-accounts. Each account has its own individualised and optimised stack, and all are capable of processing many billions of events per month.

We are looking for another SRE to help us grow to managing 1,000 and then 10,000 AWS, GCP and (in the future) Azure accounts. You will be pioneering solutions for managing estates of this size through cutting-edge monitoring and automation. You'll work closely with our Tech Ops Lead on all aspects of our proprietary deployment, orchestration, and monitoring stacks.

Tech Ops has two areas of responsibility: the centralised services we provide customers, and their pipeline infrastructure hosted in their own AWS or GCP accounts. Within both domains we are striving to increase service reliability, fulfil customer requests in a timely fashion, and automate recurring tasks. Task automation is essential as our customer base grows, because our infrastructure estate scales linearly with our customer numbers, unlike most software businesses.

The challenge of automating the maintenance and deployment of thousands of individualised stacks is an enormously ambitious undertaking and a hugely exciting infrastructure automation challenge you're unlikely to find anywhere else!

The environment you'll be working in:

Our company values are Transparency, Honesty, Ownership, Inclusivity, Empowerment, Customer-centricity, Growth, and Technical Excellence. These aren't just words we plucked out of thin air; we came up with them together as a company and are continually looking for new ways to weave them into our day-to-day operations. From flexible hours and working locations to the way we give feedback, we're passionate about building a company that supports both company and individual development.

What you'll be doing:

- Strategising and innovating around the Private SaaS model, helping Snowplow to plan for the future
- Collaborating closely with our team of SREs and other teams around the business to ensure we continue to provide our customers with an excellent service
- Maintaining and developing our growing Terraform infrastructure-as-code stacks, which we use to deploy infrastructure for all internal and client use cases
- Maintaining our internal infrastructure stacks, which include the HashiCorp suite as well as our Snowplow Insights UI and VPNs
- Improving the resilience and self-healing ability of our infrastructure estate
- Owning your share of support tickets and participating in an on-call rotation, helping us serve our client base 24/7
- Being a key part of our response to high-severity internal or customer incidents, ensuring we meet all SLAs

What you bring to the team:

- You have worked with AWS in a production capacity; experience with GCP and/or Azure is a bonus
- You have worked with Terraform, CloudFormation, or some other form of infrastructure-as-code tooling
- Any experience with the HashiCorp stack (Vault, Consul, Nomad) and an understanding of its role in infrastructure automation is a bonus
- You have worked with Docker and are familiar with container-based architectures
- You are knowledgeable about the Linux operating system and how to manage servers in a production capacity
- You are knowledgeable about cloud networking principles and how to troubleshoot issues in this space
- You are comfortable scripting in one or more of: Bash, Python, Ruby, or Perl
- You are comfortable programming in one or more of: Java, Scala, Golang, or Python

What you'll get in return:

- A competitive package based on experience, including share options
- 25 days of holiday a year (plus bank holidays)
- MacBook or Dell XPS 13/15
- Two fantastic company Away Weeks in a different European city each year (the last one was in November 2019 in Bratislava)
- Work alongside a supportive and talented team, with the opportunity to work on cutting-edge technology and challenging problems
- Grow and develop in a fast-moving, collaborative organisation
- Fun events in and around London organised by our Cultural Work Committee
- If based in London, a convenient office location in central London (Aldgate) and a continuous supply of Pact coffee and healthy snacks
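As a hedged sketch of what "programmatically controlling thousands of sub-accounts" can look like in practice (an illustration only, not Snowplow's proprietary tooling), this minimal Go program enumerates every account in an AWS Organization using the AWS SDK for Go v2; per-account automation would then assume a role in each listed account:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/organizations"
)

func main() {
	// Credentials and region come from the usual environment/config chain.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := organizations.NewFromConfig(cfg)

	// Page through every sub-account in the organization.
	paginator := organizations.NewListAccountsPaginator(client, &organizations.ListAccountsInput{})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		for _, acct := range page.Accounts {
			fmt.Println(aws.ToString(acct.Id), aws.ToString(acct.Name))
		}
	}
}
```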
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Admin, Engineer, Sys Admin, Cloud, Ruby, Stats, SaaS, Golang, and Linux roles:

$70,000 — $120,000/year
*Only candidates residing inside of the United States will be considered for this role*

About VividCortex

VividCortex provides deep database performance monitoring to drive speed, efficiency, and savings. Our cloud-based SaaS platform offers full visibility into major open-source databases – MySQL, PostgreSQL, Amazon Aurora, MongoDB, and Redis – at any scale, without overhead. By giving entire engineering teams the ability to monitor database workload and query behavior, VividCortex empowers them to improve application speed, efficiency, and uptime.

Founded in 2012, and headquartered in the Washington, DC metro area with remote teams in the US and abroad, our company's growth continues to accelerate (#673 Inc. 5000). Hundreds of industry leaders like DraftKings, Etsy, GitHub, SendGrid, Shopify, and Yelp rely on VividCortex.

We know our team is our greatest strength, so we support our people with excellent benefits including a 401k, professional development assistance, flexible paid leave (vacation, parental, sick, etc.), and a health/wellness benefit. We enjoy getting together and giving back to the community through volunteer services. We believe in offering every employee the tools and opportunity to impact the business in a positive way. We care about inclusiveness and working with people who help us learn and grow.

About the Role

VividCortex is looking for an experienced Data Engineer to architect and build our next-generation internal data platform for large-scale data processing. You are at the intersection of data, engineering, and product, and run the strategy and tactics of how we store and process massive amounts of performance metrics and other data we measure from our customers' database servers.

Our platform is written in Go and hosted on the AWS cloud. It uses Kafka, Redis, and MySQL for data storage and analysis. (A small sketch of consuming from Kafka in Go follows the lists below.) We are a DevOps organization building a 12-factor microservices application; we practice small, fast cycles of rapid improvement and full exposure to the entire infrastructure, but we don't take anything to extremes.

The position offers excellent benefits, a competitive base salary, and the opportunity for equity. Diversity is important to us, and we welcome and encourage applicants from all walks of life and all backgrounds.

Responsibilities:

* Work with others to define, and propose for approval, a modern data platform design strategy and matching architecture and technology choices to support it, with the goals of providing a highly scalable, economical, observable, and operable data platform for storing and processing very large amounts of data within tight performance tolerances.
* Perform high-level strategy and hands-on infrastructure development for the VividCortex data platform, developing and deploying new data management services both in our existing data center infrastructure and in AWS.
* Collaborate with engineering management to drive data systems design, deployment strategies, scalability, infrastructure efficiency, monitoring, and security.
* Discover, define, document, and design scalable backend storage and robust data pipelines for different types of data streams.
* Write code, tests, and deployment manifests and artifacts, using CircleCI, Git and GitHub, pull requests, issues, etc.
* Collaborate with other engineers on code review and approval.
* Measure and improve code and system performance and availability as they run in production.
* Support product management in prioritizing and coordinating work on changes to our data platform, and serve as a lead on user-focused technical requirements and analysis of the platform.
* Help provide customer support, and pitch in with other departments, such as Sales, as needed.
* Rotate through on-call duty.
* Understand and enact our security posture and practices.
* Continually seek to understand and improve performance, reliability, resilience, scalability, and automation. Our goal is that systems should scale linearly with our customer growth, while the effort of maintaining the systems scales sub-linearly.
* Contribute to a culture of blameless learning, responsibility, and accountability.
* Manage your workload, collaborating and working independently as needed, keeping management appropriately informed of progress and issues.

Preferred Qualifications:

* Experience building systems for both structured and unstructured data.
* AWS infrastructure development experience.
* Mastery of relational database technologies such as MySQL.
* You are collaborative, self-motivated, and experienced in the general development, deployment, and operation of modern API-powered web applications using continuous delivery and Git in a Unix/Linux environment.
* Experience and knowledge programming in Golang or Java.
* You have experience resolving highly complex data infrastructure design and maintenance issues, with at least 4 years of data-focused design and development experience.
* You are hungry for more accountability and ownership, and for your work to matter to users.
* You're curious, with a measured excitement about new technologies.
* SaaS multi-tenant application experience.
* Ability to understand and translate customer needs into leading-edge technology.
* Experience with Linux system administration and enterprise security.
* A Bachelor's degree in computer science, another engineering discipline, or equivalent experience.

Note to Agencies and Recruiters: VividCortex does not engage with unsolicited contact from agencies or recruiters. Unsolicited resumes and leads are property of VividCortex, and VividCortex explicitly denies that any information sent to VividCortex can be construed as consideration.
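As a hedged illustration of the Kafka-based pipeline described above (not VividCortex's actual code; the library choice, topic name, and broker address are assumptions for the example), here is a minimal Go consumer using the segmentio/kafka-go package:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Hypothetical topic and broker; a real pipeline would use its own.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "metrics-ingest",
		Topic:   "host-metrics",
	})
	defer r.Close()

	for {
		// ReadMessage blocks until the next message arrives and commits
		// offsets automatically when a GroupID is set.
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("offset=%d key=%s value=%s\n", msg.Offset, msg.Key, msg.Value)
	}
}
```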
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Engineer, DevOps, Amazon, Git, SaaS, Golang, Linux, and Backend roles:

$70,000 — $120,000/year
Site Reliability Engineer (AWS)

Remote, ideally located in the UTC -4 to -6 region

Snowplow enables you to track any event data, ask any question of that data, and use any tool you want to answer it. We want to empower people and companies to do transformative things using data. As a company, we have almost doubled in size over the last 18 months, and we're not looking to slow down.

To support our growth, we are now looking for an SRE (Site Reliability Engineer) to join our Tech Ops team. You'll be taking the lead on all things AWS, including development and improvement of the current stack and rolling out new features, all whilst keeping these environments running smoothly. We would love to hear from you if the idea of programmatically controlling thousands of production environments excites you!

The Opportunity:

Our Private SaaS offering has grown significantly over the past year, and we now orchestrate and monitor Snowplow event pipelines across more than 150 customer-owned AWS & GCP sub-accounts. Each account has its own individualised and optimised stack, and all are capable of processing many billions of events per month.

We are looking for another SRE to help us grow to managing 1,000 and then 10,000 AWS, GCP & Azure accounts. You will be pioneering solutions for managing estates of this size through cutting-edge monitoring and automation. You'll work closely with our Tech Ops Lead on all aspects of our proprietary deployment, orchestration, and monitoring stacks.

Tech Ops has two areas of responsibility: the centralised services we provide customers, and their pipeline infrastructure hosted in their own AWS or GCP accounts. Within both domains we are striving to increase service reliability, fulfil customer requests in a timely fashion, and automate recurring tasks. Task automation is essential as our customer base grows, because our infrastructure estate scales linearly with our customer numbers, unlike most software businesses.

The challenge of automating the maintenance and deployment of thousands of individualised stacks is an enormously ambitious undertaking and a hugely exciting infrastructure automation challenge!

The environment you'll be working in:

Our company values are Transparency, Honesty, Ownership, Inclusivity, Empowerment, Customer-centricity, Growth, and Technical Excellence. These aren't just words we plucked out of thin air; we came up with them together as a company and are continually looking for new ways to weave them into our day-to-day operations.
From flexible hours and working locations to the way we give feedback, we're passionate about building a company that supports both company and individual development.

What you'll be doing:

* Maintaining and developing our growing Terraform infrastructure-as-code stacks, which we use to deploy infrastructure for all internal and client use cases
* Maintaining our internal infrastructure stacks, which include the HashiCorp suite as well as our Snowplow Insights UI and VPNs
* Participating in our on-call rotation to help us serve our client base 24/7
* Taking rotations of L3 Technical Support, where you will be responsible for triaging and dealing with infrastructure issues
* Handling high-severity internal or customer incidents, ensuring we meet all SLAs

What you bring to the team:

* You have worked with AWS in a production capacity; experience with GCP and/or Azure is a bonus
* You have worked with Terraform, CloudFormation, or some other form of infrastructure-as-code tooling
* Any experience with the HashiCorp stack (Vault, Consul, Nomad) and an understanding of its role in infrastructure automation is a bonus
* You have worked with Docker and are familiar with container-based architectures
* You are knowledgeable about the Linux operating system and how to manage servers in a production capacity
* You are knowledgeable about cloud networking principles and how to troubleshoot issues in this space
* You are comfortable scripting in one or more of: Bash, Python, Ruby, or Perl
* You are comfortable programming in one or more of: Java, Scala, Golang, or Python

What you'll get in return:

* A competitive package based on experience, including share options
* 25 days of holiday a year (plus bank holidays)
* MacBook or Dell XPS 13/15
* Two fantastic company Away Weeks in a different European city each year (the last one was in May 2019 in Lisbon)
* Work alongside a supportive and talented team, with the opportunity to work on cutting-edge technology and challenging problems
* Grow and develop in a fast-moving, collaborative organisation
* Fun events in and around London organised by our Cultural Work Committee
* If based in London, a convenient office location in central London (Shoreditch) and a continuous supply of Pact coffee and healthy snacks

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Admin, Engineer, Sys Admin, Cloud, Ruby, Stats, SaaS, Golang, and Linux roles:

$70,000 — $120,000/year
LEAD DEVOPS ENGINEER

About us

Founded in NYC, SecurityScorecard (www.securityscorecard.com) helps companies regain control and visibility of their partner ecosystem. By continuously collecting millions of proprietary security signals, we non-intrusively benchmark company security performance against peers and industry. Our SaaS platform continues to be adopted by industry leaders around the globe to vet their vendor ecosystems.

You can see pictures from our day here: http://on.fb.me/1Sig7Dj

We are a startup in a hot market, looking to grow our team to meet the demand that our customers and prospects are driving. We are using cutting-edge technology and handling large amounts of data. We also offer a benefits package including medical, dental, vision, and stock options, among many other cool perks. Add to that team-building outings, Nerf-gun battles, coffee, and some of the smartest minds in NY!

The CISO/CTO co-founders are encoded with security DNA, and have spent many years running security and technology teams for startups and Fortune 500 companies. Their disruptive technology is gaining incredible market traction. The company continues to scale and is looking for bright, talented individuals to join the team and build a great company. The company is well funded by notable, high-profile technology investors with deep security expertise.

Summary

We're looking for a Linux ninja with a passion for automation: fully proficient in Ansible, with experience deploying Rails and Golang applications, writing build scripts in Capistrano, and administrating Redis and Elasticsearch clusters. Strong AWS skills (security groups, load balancers, RDS), strong Bash skills, and strong Linux security skills.

Requirements
* BS or MS in Computer Science, Engineering, Operational Research, Statistics, or another quantitative field of study
* An easy-to-get-along-with personality; collaborates well with colleagues even under challenging circumstances, and values team success
* Strong Linux skills
* Ansible
* AWS - EC2 and RDS
* MySQL / Rails / Redis / Elasticsearch
* Capistrano, Zabbix
* Very strong Bash and Ruby scripting
* Ability to learn and develop in Golang

Responsibilities
- Maintain Linux servers
- Automate server setup using Ansible
- Automate setup of various in-house applications
- Debug production issues at scale
- Write build scripts for various Rails applications in Capistrano
- Automate various monthly procedures
- Debug distributed job queues and workers

Benefits & Culture

Located in the heart of NYC, SecurityScorecard is rapidly becoming one of the hottest, most disruptive start-ups in New York. Backed by the biggest VCs and the most well-respected business and thought leaders, we're scaling at an incredibly rapid pace and looking for agile, motivated team members to join us in building an incredible company culture!

Benefits
* Health and commuter benefits
* Unlimited vacation
* Flexible schedule
* Uncapped opportunities for growth and skill development
* Work in a new, modern office space (your foosball and table tennis skills will improve!)

Culture
* You are a full member of a team, working with Ph.D.s in computer science and industry executives
* Agile, hyper-paced environment
* We love keeping customers excited and delighted
* We love to build and automate from the ground up
* We always learn and experiment, adopting new ideas very fast
* We love to take risks and learn quickly from mistakes

We're looking forward to connecting with you.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, InfoSec, Finance, Elasticsearch, Ruby, Admin, Golang, Medical, Engineer, Linux, Sys Admin, Design, Executive, Digital Nomad, and SaaS roles:

$70,000 — $120,000/year