\nOver the past year we have built a next-generation multi-cell architecture, and we are looking for a Senior Software Engineer II to join the team and help add capabilities and migrate customers to the new deployments.\n\nThe cell-based architecture is a large-scale, worldwide distributed system, and this team has an outsize impact on every customer of dbt Labs. Today we serve some of the largest data-driven organizations in the world, enabling them to make decisions based on the knowledge at the core of their business. The quality, reliability, and performance of our multi-cell implementation translate into leverage for analysts, analytics engineers, and data engineers in organizations of all shapes and sizes.\nIn this role, you can expect to:\n\n\n* Build cell-based application architecture that reliably and performantly delivers dbt Cloud to customers worldwide. You will work on a variety of technologies and features, including our regional service layer, enabling self-service accounts across regions, cell migrations, and product security.\n\n* Collaborate with multiple engineering teams, Product Management, Security, and Customer Support.\n\n* Work with a variety of programming languages, systems, and technologies, including Golang, Python, Postgres, Kubernetes, Terraform, Auth0, and Datadog.\n\n* Drive scaling and automation initiatives.\n\n* Define tradeoffs and make decisions about what, how, and when we build. We are a fast-moving startup, and building the right platform at the place where application and infrastructure meet unlocks reliability, quality, and productivity for the long term.\n\n\n\nQualifications:\n\n\n* Have 7+ years of experience in software engineering, including production experience supporting SaaS applications.\n\n* Minimum requirement of a Bachelor's degree in a related field (computer science, computer engineering, etc.) OR\n\n* Have completed an engineering-related bootcamp.\n\n\n\nYou are a good fit if you:\n\n\n* Have implemented large-scale distributed systems and have a deep interest in application performance, scalability, reliability, and operability.\n\n* Have designed and built cloud applications that include containerized workloads, Python or Golang, and at least some of our technology stack. You don't need to be experienced with every technology we use today.\n\n* Have a systematic problem-solving approach coupled with strong communication skills and a sense of ownership and drive.\n\n* Ensure high programming standards in your team by writing unit, functional, and integration tests and participating in timely, constructive code review.\n\n* Are comfortable operating in a fast-paced environment that emphasizes making small changes to rapidly iterate, learn, and deliver.\n\n* Are interested in our mission and values and are inspired to drive progress in the data and analytics ecosystem.\n\n\n\nYou'll have an edge if you:\n\n\n* Have excellent written communication skills. We are a remote-first company that uses writing to facilitate decision-making.\n\n* Have experience with technical leadership.\n\n\n\nCompensation and Benefits:\n\n\n* Salary: $180,000-$235,000 USD\n\n* Equity Stake*\n\n* Benefits - dbt Labs offers:\n\n\n* Unlimited vacation (and yes, we use it!)\n\n* 401k with a 3% guaranteed contribution\n\n* Excellent healthcare\n\n* Paid Parental Leave\n\n* Wellness stipend\n\n* Home office stipend, and more!\n\n\n\n\nWhat to expect in the hiring process (all video interviews unless accommodations are needed):\n\n\n* Interview with a Talent Acquisition Partner\n\n* Technical Interview with the Hiring Manager\n\n* Team Interviews\n\n* Final interview with a leadership team member\n\n\n\n\n#LI-RC1\n\n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to SaaS, Python, Cloud, Senior and Engineer jobs that are similar:\n\n
$60,000 — $110,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
\nRestaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365's culture is focused on empowering team members to produce top-notch results while elevating their skills. We're constantly evolving and improving to make sure we are and always will be "Best in Class" ... and we want that for you too!\n\n\nRestaurant365 is looking for an experienced Data Engineer to join our data warehouse team that enables the flow of information and analytics across the company. The Data Engineer will participate in the engineering of our enterprise data lake, data warehouse, and analytic solutions. This is a key role on a highly visible team that will partner across the organization with business and technical stakeholders to create the objects and data pipelines used for insights, analysis, executive reporting, and machine learning. You will have the exciting opportunity to shape and grow with a high-performing team and the modern data foundation that enables the data-driven culture to fuel the company's growth. \n\n\n\nHow you'll add value: \n* Participate in the overall architecture, engineering, and operations of a modern data warehouse and analytics platforms. \n* Design and develop the objects in the Data Lake and EDW that serve as core building blocks for the semantic layer and datasets used for reporting and analytics across the enterprise. \n* Develop data pipelines, transformations (ETL/ELT), orchestration, and job controls using repeatable software development processes, quality assurance, release management, and monitoring capabilities. \n* Partner with internal business and technology stakeholders to understand their needs, and then design, build, and monitor pipelines that meet the company's growing business needs. 
\n* Look for opportunities for continuous improvement that automate workflows, reduce manual processes, reduce operational costs, uphold SLAs, and ensure scalability. \n* Use an automated observability framework to ensure the reliability of data quality, data integrity, and master data management. \n* Partner closely with peers on Product, Engineering, Enterprise Technology, and InfoSec teams on the shared enterprise needs of a data lake, data warehouse, semantic layer, transformation tools, BI tools, and machine learning. \n* Partner closely with peers in Business Intelligence, Data Science, and SMEs in partner business units to translate analytics and business requirements into SQL and data structures. \n* Ensure platforms, products, and services are delivered with operational excellence and rigorous adherence to ITSM processes and InfoSec policies. \n* Adopt and follow sound Agile practices for the delivery of data engineering and analytics solutions. \n* Create documentation for reference, processes, data products, and data infrastructure. \n* Embrace ambiguity and other duties as assigned. \n\n\n\nWhat you'll need to be successful in this role: \n* 3-5 years of engineering experience in enterprise data warehousing, data engineering, business intelligence, and delivering analytics solutions \n* 1-2 years of SaaS industry experience required \n* Deep understanding of current technologies and design patterns for data warehousing, data pipelines, data modeling, analytics, visualization, and machine learning (e.g. 
Kimball methodology) \n* Solid understanding of modern distributed data architectures, data pipelines, and API pub/sub services \n* Experience engineering for SLA-driven data operations, with responsibility for uptime, delivery, consistency, scalability, and continuous improvement of data infrastructure \n* Ability to understand and translate business requirements into data/analytic solutions \n* Extensive experience with Agile development methodologies \n* Prior experience with at least one of: Snowflake, BigQuery, Synapse, Databricks, or Redshift \n* Highly proficient in both SQL and Python for data manipulation and the assembly of Airflow DAGs \n* Experience with cloud administration and DevOps best practices on AWS and GCP, and/or general cloud architecture best practices, with accountability for cloud cost management \n* Strong interpersonal, leadership, and communication skills, with the ability to relate technical solutions to business terminology and goals \n* Ability to work independently in a remote culture and across many time zones and outsourced partners, likely CT or ET \n\n\n\nR365 Team Member Benefits & Compensation\n* This position has a salary range of $94K-$130K. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices. \n* Comprehensive medical benefits, 100% paid for the employee\n* 401k + matching\n* Equity Option Grant\n* Unlimited PTO + company holidays\n* Wellness initiatives\n\n\n#BI-Remote\n\n\n$90,000 - $130,000 a year\n\nR365 is an Equal Opportunity Employer, and we encourage all forward-thinkers who embrace change and possess a positive attitude to apply. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Design, SaaS, InfoSec, Python, Accounting, DevOps, Cloud, API and Engineer jobs that are similar:\n\n
$60,000 — $110,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nRemote
This job post is closed and the position is probably filled. Please do not apply. Work for Athenian and want to re-open this job? Use the edit link in the email when you posted the job!
**Position type:** Full-time employee\n\n**Seniority:** Senior Software Engineer \n\n**Location:** Remote (desired [time zone between UTC-3 and UTC+3](https://en.wikipedia.org/wiki/UTC_offset#/media/File:World_Time_Zones_Map.png))\n\n**Compensation:** 60k–65k EUR/year (67k–73k USD/year) + stock options (both based on seniority level)\n\n\n## About Athenian\nAt [Athenian](https://athenian.co), our mission is to help engineering leaders build better software faster by leveraging metrics and insights. We provide a data-enabled engineering platform that offers end-to-end visibility into the software delivery lifecycle.\n\nWe're committed to building a healthy team that welcomes a diverse range of backgrounds and experiences. We want people who care about our mission, are ready to grow, believe in our values, and want to make the people around them better.\n\nWe are a [team of 18](https://www.linkedin.com/company/athenian/) who are entirely remote across 7 countries in North America, Europe, and Asia. We're backed by amazing investors, and our customers, large and small, love working with us.\n\nWe're growing quickly and are building something big together. We'd love to hear from you!\n\n\n## About the Role\n\nAs a Senior Software Engineer on the API at Athenian, you can expect to have a big impact in shaping the product.\n\nYou will have the opportunity to work alongside our highly skilled team to design, build, and iterate on a world-class software web application.\n\nYou are expected to contribute to the API part of the backend. The Athenian API is currently public on GitHub, and the corresponding OpenAPI specification is open source. We deploy the API in Google Kubernetes Engine.\n\nWe are developers building a product for other developers, and we build our product with a sense of pride and ownership. 
You will be in a collaborative environment where you will work closely with product and engineering to understand user needs and discuss new ideas to solve complex problems.\n\n## Responsibilities\n \n\n- Co-own the API that is a cornerstone of the product, providing the entire analytics engine.\n- Collaborate closely with the data-retrieval and frontend developers. "One API to glue them all."\n- Understand customers' needs, propose ideas, and discuss solutions, innovating with the team on engineering and product.\n\n \n\n## Skills & Experience\n### Essential:\n- Full professional proficiency in English, written and spoken. The ability to communicate comes first, no matter the level of technical skill.\n- Strong experience writing highly performant, asynchronous, type-hinted Python 3 code.\n- Strong experience writing highly performant queries in PostgreSQL.\n- Strong knowledge of numpy.\n- Strong experience with Linux.\n- Experience with scalable backend design: load balancing, fault tolerance, etc.\n- Experience writing Cython code.\n- Experience with OpenAPI.\n- Experience with pytest or an alternative.\n- Experience with Continuous Integration and Continuous Delivery.\n- Strong knowledge of Git tools and concepts.\n- Knowledge of pandas.\n- Knowledge of basic mathematical & statistical concepts.\n- Knowledge of Docker and Kubernetes.\n- Familiarity with Google Cloud Platform or similar.\n\n\n### Desirable:\n\n- Experience with aiohttp or similar; SQLAlchemy.\n- Knowledge of C/C++ or Rust.\n- Experience with Go.\n- Experience with columnar DBs like ClickHouse or Druid.\n- Experience with Redis, memcached, or similar.\n- Experience with event-driven backend architectures.\n- Experience with GitHub Actions, CircleCI, or Jenkins.\n- Experimentation with Machine Learning and/or Data Science.\n- Mathematical background.\n- Having worked remotely.\n- Having worked in a dynamic start-up environment.\n- Having worked on a SaaS product.\n- Having used modern 
collaboration tooling (Jira, GitHub, Slack, Zoom, etc.).\n\n\n\n### Profile:\n- Responsible and professional\n- Independent, goal-oriented, proactive attitude\n- Disciplined and communicative in remote environments\n- Collaborative, with a strong team spirit\n- Curious and interested in learning new things\n\n\n\n## Hiring process\nThe hiring process consists of multiple steps:\n\n1. CV review\n2. Screening interview\n3. Technical assessment project\n4. Technical interview + Q&A\n5. Architecture design interview + manager interview\n6. Communication of the outcome\n\n## Engineering at Athenian\nAt Athenian Engineering we are currently a team of 5, consisting of the Head of Engineering and 4 world-class Senior Engineers, each with a diverse area of expertise, ranging from language analysis and system architecture to machine learning on code and modern APIs.\n\n\nWe collaborate with each other on a daily basis, and we value every contribution and idea. We foster good collaboration through transparency and good communication, and we believe that teamwork is key to moving fast and being successful.\n\n \n\n## Athenian Culture \nAthenian is a fully remote company. At the moment, we are 18 people from 7 different countries working closely together in a fully distributed way.\n\nWe put a lot of value into collaboration and feedback, no matter whether it comes from our CEO, a customer, Product, or Engineering, because we know that the best ideas can come from anywhere.\n\nWe believe in transparency and collaboration, which is reflected in how we operate internally and externally.\n\nWe are humane and care about each other's growth and wellbeing.\n\nFlexible hours: set a schedule that fits you. \n\nPlease mention the word **REGARD** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xMjU=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Salary and compensation\n
$60,000 — $80,000/year\n
\n\n#Benefits\n
Async\n\n
\n\n#Location\nWorldwide
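The essential skills above call for highly performant, asynchronous, type-hinted Python. As a rough illustration of that style only (the function names and metric values below are invented for this sketch, not Athenian's actual API), a concurrent fan-out over several metric lookups might look like:

```python
import asyncio

async def fetch_metric(name: str) -> dict[str, float]:
    """Stand-in for real async I/O such as a PostgreSQL or HTTP call
    (hypothetical example; not Athenian's actual API)."""
    await asyncio.sleep(0)  # placeholder for awaiting real I/O
    return {name: 1.0}

async def gather_metrics(names: list[str]) -> dict[str, float]:
    # Fan the lookups out concurrently instead of awaiting them one by one.
    results = await asyncio.gather(*(fetch_metric(n) for n in names))
    merged: dict[str, float] = {}
    for partial in results:
        merged.update(partial)
    return merged

metrics = asyncio.run(gather_metrics(["lead_time", "cycle_time"]))
assert set(metrics) == {"lead_time", "cycle_time"}
```

The point of the pattern is that `asyncio.gather` overlaps the waits, so total latency tracks the slowest call rather than the sum of all calls.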
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Splitgraph and want to re-open this job? Use the edit link in the email when you posted the job!
# We're building the Data Platform of the Future\nJoin us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.\n\nSplitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the\nfounders.\n\nSplitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both USA and UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.\n# Open Positions\n**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap \nbetween data and software engineering. We welcome candidates from all engineering backgrounds.\n\n[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)\n\n[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)\n\n→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)\n\n# What is Splitgraph?\n## **Open Source Toolkit**\n\n[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. 
Use Splitgraph to package your data into self-contained\ndata images that you can share with other Splitgraph instances.\n\n## **Splitgraph Cloud**\n\nSplitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.\n\n# Learn More About Us\n\n- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)\n\n- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)\n\n- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))\n\n- [Read our blog](https://www.splitgraph.com/blog)\n\n- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)\n\n- [Follow us on Twitter](https://www.twitter.com/splitgraph)\n\n- [Find us on GitHub](https://www.github.com/splitgraph)\n\n- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)\n\n- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets\n\n# How We Work: What's our stack look like?\n\nWe prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. 
Here's a sampling of the languages and tools we work with:\n\n- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.\n\n- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.\n\n- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. 
We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.\n\n- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.\n\n- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).\n\n- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. 
We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.\n\n- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).\n\n- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.\n\n- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.\n\n- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.\n\n- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.\n\n- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. 
That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously and try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can, but we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash.
We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth, and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Location
Worldwide
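As a footnote to the stack description above: the data delivery network's scripting hooks (query rewriting, column masking, ACLs) all amount to intercepting queries and result sets in the proxy layer. Here is a rough, illustrative sketch of the column-masking idea only. Splitgraph's real hooks live in Lua, C, and embedded Python inside PgBouncer; the `mask_rows` helper and `MASKED_COLUMNS` policy below are hypothetical names, not Splitgraph APIs.

```python
# Illustrative only: sketches a column-masking pass over result rows before
# they are proxied back to the client. MASKED_COLUMNS and mask_rows are
# hypothetical names for this sketch, not part of Splitgraph or PgBouncer.

MASKED_COLUMNS = {"email", "ssn"}  # columns the current role may not see

def mask_rows(columns, rows, masked=MASKED_COLUMNS):
    """Replace values in masked columns with None (rendered as NULL)."""
    hidden = {i for i, name in enumerate(columns) if name in masked}
    return [
        tuple(None if i in hidden else value for i, value in enumerate(row))
        for row in rows
    ]

cols = ["id", "email", "plan"]
rows = [(1, "a@example.com", "free"), (2, "b@example.com", "pro")]
print(mask_rows(cols, rows))  # [(1, None, 'free'), (2, None, 'pro')]
```

In the real system a hook like this also has to run per-session, consult the user's ACLs, and rewrite the query itself, which is why it lives in the PgBouncer layer rather than in application code.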
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for CircleBlack and want to re-open this job? Use the edit link in the email when you posted the job!
🤖 Closed by robot after apply link errored w/ code 404 3 years ago
Who are we?

CircleBlack, Inc. provides financial advisors with technology that aggregates data, integrates other financial applications seamlessly, manages data from multiple custodians, and delivers actionable intelligence about client portfolios, helping advisors better manage clients' wealth while growing and deepening advisor-client relationships. CircleBlack provides a leading platform built for the digital age, with a web-based and mobile application that can be taken anywhere and accessed anytime. CircleBlack's solution leverages proprietary technology that helps sustain the Company's unique competitive advantages. CircleBlack believes in making wealth management better, for both the investor and the advisor. For more information about CircleBlack, visit https://www.circleblack.com

Position Summary:

We are looking for a passionate, forward-thinking Full-Stack Senior Software Engineer to design, develop, and maintain our software solutions. You will be working on building high-quality, performant software that enables financial advisors to deliver real-time data to their clients while adapting to industry trends.
Ideal candidates are passionate about solving complex problems while being able to design, develop, and support industry-leading solutions using Node.js in a fast-paced environment.

Responsibilities:

* Design and develop Node.js APIs, integrations, analytics engines, and infrastructure tools.

* Implement modern React user interfaces.

* Lead the migration from one core application to another, proposing and implementing modern performance optimizations and scaling strategies, such as a React user interface.

* Drive software change while ensuring software deliverables comply with quality standards.

* Collaborate effectively with stakeholders, designers, and testers, advising on impact and performance to deliver the highest quality of software.

* Perform code reviews, suggesting improvements and ensuring adherence to best practices.

* Participate in an Agile development process.

* Develop for a full stack of technologies including Node.js, Nginx, React, Angular 1, MySQL, ElasticSearch, Kibana, PHP, Perl, Python and/or Ruby, and Redis on AWS Linux servers.

* Determine the root cause of complex software issues and develop practical solutions.

* Serve as technical team lead and act as a mentor, enabling skill development through coaching and training opportunities.

Competencies:

* Ability to approach problems in a holistic manner, both tactical and strategic

* Continuously aware of coaching and mentoring opportunities with junior software engineers

* Creative, resourceful, outside-the-box thinking

* Initiator; natural “fixer” mentality

* Problem-solver and analytical

Education/Qualification:

* 7+ years of application development experience; 4+ years of experience using Node.js.
This is a must!

* 2+ years of experience with MySQL database development

* Experience building maintainable and testable code bases, including API and database design in an agile environment, and driving software change

* Hands-on experience integrating third-party SaaS providers using a variety of technologies, including at least some of the following: REST, SOAP, SAML, OAuth, OpenID, JWT, Salesforce

* Experience working in a cloud environment, specifically AWS

* Experience with non-relational databases such as Mongo, Redis, ElasticSearch

* Ability to work independently, and remotely for the time being

* BSc degree in Computer Science, Engineering, or a relevant field

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Senior, Engineer, Full Stack, Developer, Digital Nomad, React, Cloud, Python, Angular, API, Mobile, Junior, SaaS, and Linux:

$70,000 — $120,000/year
#Benefits

- 401(k)

- Distributed team

- Async

- Vision insurance

- Dental insurance

- Medical insurance

- Unlimited vacation

- Paid time off

- 4 day workweek

- 401k matching

- Company retreats

- Coworking budget

- Learning budget

- Free gym membership

- Mental wellness budget

- Home office budget

- Pay in crypto

- Pseudonymous

- Profit sharing

- Equity compensation

- No whiteboard interview

- No monitoring system

- No politics at work

- We hire old (and young)
Front End Software Engineer (ReactJS)

Remote job

Job description

Railnova is hiring an experienced front-end software engineer (JavaScript/React) for our Railgenius software team to bring data analytics to railway end-users.

The Railgenius team is currently composed of a product manager, data scientist, and back-end engineers, and leverages our UX/UI designer, infrastructure team, and other product development teams at Railnova. We want to reinforce the Railgenius team with an experienced ReactJS developer to strengthen its product position as a stand-alone SaaS web product with a great user experience.

Our customers are very engaged and never shy of feature suggestions, so you'll work with our UX/UI designer, product manager, and support team to decide what to implement. You'll benefit from a lot of autonomy with a fast release cycle.

Real examples of work the Railgenius team has done lately (to give you a better idea of what this position entails):

* Implement a user-friendly interface for a complex event processing rule engine, enabling our users to detect rolling stock failures in real time.

* Build a powerful data inspector graphing tool to offer our clients a way to discover and graph multiple correlated signals in the browser.

* Show the live, interpolated position (think Flightradar24 for trains) of trains along railway lines and custom map layers.

* Optimize websocket bandwidth to cope with limited client browser capacity, while displaying hundreds of live sensors from a fleet of trains on a single page.

* Design clever database models and APIs to express multi-tenant sharing of data and complex access permissions, preserving the privacy, security, and intellectual property of each party in the data sharing process.

* Talk directly to customers to understand the desirability and user fit of what is being built.

* Recently, we started to use Figma front-end features to facilitate communication
between UX designers, product managers, and front-end developers, and Storybook to reuse front-end components.

Examples of what surrounding team members have done lately (the Railgenius team is multidisciplinary, as you can see):

* Data scientists trained a physical model on 24 months of historic battery data spanning hundreds of GB to provide a predictor of battery health while train assets are parked, writing their own software and integrating it into the pipeline and the user front end.

* Data scientists forecasted future usage of train locomotives by extracting past seasonality from our fine-grained historical data, to better predict maintenance dates.

* Data engineers optimised heavy SQL queries and indexes to offer great response times for time-series querying and pattern search to our end-users.

* Data engineers migrated our real-time complex event processing framework from a homemade Python base to Apache Kafka to help absorb peak traffic and increase availability.

* The infrastructure team migrated most of our applications from bare-metal servers to the AWS cloud in a few months to offer more reliability and improve the life of the engineering team.

Requirements

* You are passionate about making an awesome product for end users.

* You have a degree in computer science/engineering or an equivalent proven track record.

* You are an experienced JavaScript/ReactJS developer, familiar with responsive design.

* You can think critically about a UX design from your programmer's perspective and have a good feel for usability and aesthetics.

* You have experience with back-end APIs, Python, and SQL.

* You are a good (written) communicator, like working in a team, and enjoy speaking to customers.

What we offer

We want you to continue your personal development journey at Railnova.
You'll be given space and time for deep focus on your work, be exposed to a technical and caring team, and be given the opportunity to perfect your software engineering skills. On top of that, you'll get:

* A choice of either a fully remote position (in Europe), partial remote, or full time in our offices near Brussels South Train Station (when sanitary conditions allow for it). Railnova has a remote culture (we are big fans and users of Basecamp), with a few full-time employees remote since day one.

* 32 days of paid holidays.

* Space to grow through deep focus on your work, one conference per year of your choice, extra courses, and self-learning.

* A young, multidisciplinary, and dynamic team in a medium-sized scale-up (~30 employees), with a rock-solid, subscription-based business model in IoT and data analytics.

* A large collection of perks including a smartphone, a laptop of your choice, extra healthcare insurance, a transport card, and (depending on need) a company car.

* An open culture that nurtures creativity, while keeping our clients and the rest of the team in mind at all times.

* A balanced work environment (work from home, flexible working hours, no meetings, no emails).

* Meal vouchers.

How to apply

Please apply via the online application form and carefully fill in the 3 write-up questions to demonstrate that you are a good written communicator in English and an experienced JavaScript/ReactJS programmer. We will review your written submission within 2 weeks and let you know if you are invited to an interview. The recruiting process might also include an exercise down the line.

Agency calls are not appreciated.

PI126504447

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Engineer, Front End, Developer, Digital Nomad, English, JavaScript, Cloud, Python, API, SaaS, and Apache:

$70,000 — $120,000/year
Visual Lease is looking for a talented, hands-on Senior Database Engineer to join our engineering organization. In this role, you'll work with our cross-functional engineering team and our industry experts to design and build powerful yet easy-to-use web-based SaaS applications. You will follow and promote the latest technology patterns and best practices. You'll also directly support our move to a microservices architecture. This change, along with our move to Amazon Web Services (AWS), will involve a schema redesign and a move from MS SQL Server to Aurora PostgreSQL.

The Senior DB Engineer is a well-studied, influential technologist with C# and/or Python development skills and strong backend experience. The ideal candidate will have a proven ability to learn complex application business logic and design the needed database structures. This exciting role is positioned to make an enormous impact on our technology roadmap and to have a strong voice and influence within the engineering team.

What You Will Do

* Acts as our database subject matter expert and develops the vision for our database architecture

* Takes responsibility for the performance, scalability, and security of our database architecture

* Documents our current and future relational database structure, making changes needed to best support a microservices architecture and leverage AWS services

* Develops migration tools and oversees the migration of data from MS SQL Server to Aurora PostgreSQL

* Provides key input on schema design for future reporting considerations

* Provides hands-on support to both development teams with database needs for new features and operations teams with database scripts and utilities

* Mentors engineering team members in modern database architecture principles, and leads by example in developing optimal relational database structures and queries

* Works with the Operations team to monitor, analyze, and tune
application performance

* Lends support on database-related incident tickets, investigating and resolving issues

* May develop microservices in C# or Python that interface with the database ORM

* Follows software engineering standards and best practices; performs code reviews

Skills & Competencies

* 6+ years' experience with database design, development, and maintenance in a cloud SaaS environment

* 2+ years' experience with Postgres (Aurora Postgres preferred) in production

* Experience with large-scale data migration projects

* Experience designing high-performing reporting schemas

* Experience with database design in a microservices architecture (without functions and stored procedures)

* Experience implementing data solutions with AWS is highly desired

* Knowledge of multiple embedded or cloud-based analytics tools, such as MicroStrategy or TIBCO Jaspersoft, is highly desired

* Experience with data warehousing, data modeling, and multidimensional data models is helpful

* A solid foundation in data structures, algorithms, and OO design, with fundamentally strong programming skills in Python and/or C#/.NET

* Bachelor's or advanced degree in Computer Science or a related field, with 6+ years of engineering experience

* Self-starter with excellent communication, influencing, and follow-through skills

* Intellectual curiosity and an ability to collaboratively support multiple Scrum teams in a fast-paced, iterative product development environment

* Please note that visa sponsorship is not available for this position.

Why Work at Visual Lease?

We have a passion for simplifying the complex. Our lease accounting and lease administration software helps companies manage, analyze, and report on their leased asset portfolios. Loved by more than 700 major companies worldwide, our cloud-based SaaS platform embeds decades of lease management and financial accounting expertise.
Over the last two years, our employee base has seen significant growth. Voted one of NJ's Best Places to Work, we are looking for driven, innovative, and passionate team players to help us continue this journey and bring us to the next level.

Visual Lease is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or other characteristics protected by law. Visual Lease is a background-screening, drug-free workplace. Full-time positions offer a competitive benefits package and salary commensurate with experience.

www.visuallease.com

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Senior, Engineer, Accounting, Amazon, Cloud, Python, SaaS, and Backend:

$70,000 — $120,000/year
Hello! Are you ready to work from home and transform your career?

We're looking for an experienced and enthusiastic Cloud Data Engineer to join the engineering team at Modus. Want to help our clients build awesome solutions to accomplish their goals and vision? Are you interested in working from home with some of the best talent on the planet? Then keep reading.

About You

You love optimization and automation across multiple platforms and applications. Driving consistency for deployment and build processes is second nature to you. If you walk the line between software development and IT operations, we want to speak to you! We use modern tools, which means you'll have the opportunity to work with software like Jenkins, Docker, Kubernetes, Amazon Web Services, Ansible, Terraform, and much more.

You have 3+ years of experience and demonstrated proficiency in database development supporting complex information technology systems. You are familiar and comfortable working with enterprise applications in an agile environment. You are an expert at implementing cloud migrations to AWS and have working knowledge of cloud platforms and tools, including AWS serverless technologies.

You should have a strong understanding of AWS products, including Lake Formation, Glue and the Glue Data Catalog, S3, the Cloud Development Kit, and AWS Kinesis. In addition to AWS technologies, you should understand batch and stream processing architectures as they feed a data lake. Exposure to different file formats used in data lakes would be beneficial.

Linux is required; specifically one of the following: Ubuntu 14.04 LTS or higher, Red Hat Enterprise Linux / CentOS 6 or higher, Fedora 25 or higher, or Debian 8.0 or higher.

Experience in Bash, Java, JavaScript, or Python would be nice to have. Additional supporting skills include databases and SaaS providers such as MongoDB, Elasticsearch, Postgres, Oracle, MySQL, Heap, Google Analytics, and Salesforce.

You love learning.
Engineering is an ever-evolving world. You enjoy playing with new tech and exploring areas that you might not have experience with yet. You are a self-driven self-learner, willing to share knowledge and participate actively in your community.

Having overlap with your team is critical when working in a global remote team. Modus requires all team members to overlap with EST morning hours daily. In addition, reliable high-speed internet is a must, along with a personal development environment running macOS, Windows 10 Pro, or Linux.

Additionally, as part of this role you will be expected to adhere to our customers' information security requirements. This includes, but is not limited to, installing BitDefender endpoint security (provided by Modus Create) on the machine you will use to connect to the client's network and systems. BitDefender provides the following features: antivirus/antimalware, intrusion detection, blocking of phishing/compromised websites, full disk encryption (using tools native to the operating system), disabling of external storage, automatic patch deployment, and Windows risk management.

Things You Might Do

Modus is a fast-growing, remote-first company, so you'll likely get experience on many different projects across the organization. That said, here are some things you'll probably do:

* Give back to the community via open source and blog posts

* Travel and meet great people: as part of our remote-first lifestyle, it's important that we come together as needed to work together, meet each other in person, and have fun together.
Please keep that in mind when you apply.

* Teach and be taught: Modus creates active teams that work on internal and external projects together, giving you opportunities to stay relevant with the latest technologies and learn from experts worldwide

* Interact directly with internal and external clients to represent Modus and its value

Why Modus Create:

Our benefits may vary according to the country you are located in, so please reach out to our recruiter if you have any questions.

If you become a contractor, we offer:

* Competitive compensation

* 100% remote work (could vary according to the client's needs)

* Travel according to the client's needs

* The chance to work side-by-side with thought leaders in emerging tech

Do you have what it takes? Apply today!

About Modus Create

Modus Create is a digital product agency that accelerates digital transformation. We use high-performing small teams, emerging technology, and "new school" product development tools and methods to accelerate business outcomes. We support our clients across four core delivery areas: business and product strategy consulting, customer experience, cloud services, and Agile software delivery.

Driven by a team of world-class talent, we have been recognized on the Inc. 5000 list of fastest-growing private companies 5 years in a row, on the Washington Business Journal's list of Fastest Growing Companies in the Washington, DC area two years in a row, and as a top company for remote work by FlexJobs. We're also an official partner to Atlassian, AWS, Cloudflare, GitHub, InVision, Ionic Framework, and Vue.js!

Based on the model of an open source team, Modites work remotely and are located across the globe. That has allowed us to hire the best talent in the world, no matter where they live. Our highly collaborative, autonomous, and effective working environment is fueled by a team unified by a love of continuous learning.
Our years of thought leadership, including books, whitepapers, blog posts, and conference and MeetUp talks, demonstrate our commitment to sharing what we've learned.

We encourage every Modus employee to do the same. Our company is a platform for the growth of our employees. Through working with our distributed team of experts on challenging projects, every person that joins the Modus team can expect to continue growing and learning every day. This is your chance to be part of building something great.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Cloud, Engineer, JavaScript, Amazon, Serverless, Python, SaaS, and Linux:

$70,000 — $120,000/year
Hello! Are you ready to work from home and transform your career?

We're looking for an experienced and enthusiastic Senior DevOps Engineer to join the engineering team at Modus. Want to help our clients build awesome solutions to accomplish their goals and vision? Are you interested in working from home with some of the best talent on the planet? Then keep reading.

About You

You love optimization and automation across multiple platforms and applications. Driving consistency for deployment and build processes is second nature to you. If you walk the line between software development and IT operations, we want to speak to you! We use modern tools, which means you'll have the opportunity to work with software like Jenkins, Docker, Kubernetes, Amazon Web Services, Ansible, Terraform, and much more.

You have the ability to develop Infrastructure as Code (IaC) with tools like Terraform or CloudFormation. You are familiar and comfortable working with enterprise applications in an agile environment. You are an expert at implementing cloud migrations to AWS and have working knowledge of cloud platforms and tools, including AWS serverless technologies.

You should have experience building CI systems. You are familiar with Ruby QA frameworks such as RSpec and Capybara. You have working experience with Site Reliability Engineering.

Experience in Bash, Java, JavaScript, or Python would be nice to have. Additional supporting skills include databases and SaaS providers such as MongoDB/Atlas/DocumentDB, DynamoDB, Elasticsearch, Postgres, Oracle, MySQL, Heap, Google Analytics, and Salesforce.

You love learning. Engineering is an ever-evolving world. You enjoy playing with new tech and exploring areas that you might not have experience with yet. You are a self-driven self-learner, willing to share knowledge and participate actively in your community.

Having overlap with your team is critical when working in a global remote team.
Modus requires all team members to overlap with EST morning hours daily. In addition, reliable high-speed internet is a must.

Things You Might Do

Modus is a fast-growing, remote-first company, so you'll likely get experience on many different projects across the organization. That said, here are some things you'll probably do:

* Give back to the community via open source and blog posts

* Teach and be taught: Modus creates active teams that work on internal and external projects together, giving you opportunities to stay relevant with the latest technologies and learn from experts worldwide

* Interact directly with internal and external clients to represent Modus and its value

Why Modus Create:

Our benefits may vary according to the country you are located in, so please reach out to our recruiter if you have any questions.

If you become a contractor, we offer:

* Competitive compensation

* 100% remote work (could vary according to the client's needs)

* Travel according to the client's needs

* The chance to work side-by-side with thought leaders in emerging tech

Do you have what it takes? Apply today!

About Modus Create

Modus Create is a digital product agency that accelerates digital transformation. We use high-performing small teams, emerging technology, and "new school" product development tools and methods to accelerate business outcomes. We support our clients across four core delivery areas: business and product strategy consulting, customer experience, cloud services, and Agile software delivery.

Driven by a team of world-class talent, we have been recognized on the Inc. 5000 list of fastest-growing private companies 5 years in a row, on the Washington Business Journal's list of Fastest Growing Companies in the Washington, DC area two years in a row, and as a top company for remote work by FlexJobs.
We’re also an official partner to Atlassian, AWS, Cloudflare, GitHub, InVision, Ionic Framework, and Vue.js!\n\nBased on the model of an open source team, Modites work remotely and are located across the globe. That’s allowed us to hire the best talent in the world, no matter where they live. Our highly collaborative, autonomous, and effective working environment is fueled by a team unified by a love of continuous learning. Our years of thought leadership, including books, whitepapers, blog posts, and conference and MeetUp talks, demonstrate our commitment to sharing what we’ve learned. \n\nWe encourage every Modus employee to do the same. Our company is a platform for the growth of our employees. Through working with our distributed team of experts on challenging projects, every person who joins the Modus team can expect to continue growing and learning every day. This is your chance to be part of building something great. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to DevOps, Senior, Engineer, JavaScript, Amazon, Serverless, Cloud, Python, Ruby and SaaS jobs that are similar:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\nHello! Are you ready to Work from Home and transform your career?\n\nWe're looking for an enthusiastic DevOps Engineer to join the engineering team at Modus. Want to help our clients build awesome solutions to accomplish their goals and vision? Are you interested in working from home with some of the best talent on the planet? Then keep reading.\n\nAbout You\n\nYou love optimization and automation across multiple platforms and applications. Driving consistency for deployment and build processes is second nature to you. If you walk the line between software development and IT operations, we want to speak to you! We use modern tools, which means you’ll have the opportunity to work with software like Jenkins, Docker, Kubernetes, Amazon Web Services, Ansible, Terraform, and much more. \n\nYou can develop Infrastructure as Code (IaC) with tools like Terraform or CloudFormation. You are familiar and comfortable working with enterprise applications in an agile environment. You are an expert at implementing cloud migrations to AWS and have working knowledge of cloud platforms and tools, including AWS serverless technologies. You must have working knowledge of handling Disaster Recovery and Business Continuity in AWS.\n\nYou should have experience building CI/CD pipelines with tools like Jenkins, CircleCI, or Travis CI. It would be nice if you had experience automating account creation or Identity and Access enrollment, centralizing logs, standardizing CI/CD pipelines, and migrating from Azure AD.\n\nAn up-to-date Linux or Mac workstation, which you know at an advanced level, is required.\n\nExperience in Bash, Java, JavaScript, or Python would be nice to have. Additional supporting skills include databases and SaaS providers, including: MongoDB/Atlas/DocumentDB, DynamoDB, Elasticsearch, Postgres, Oracle, MySQL, Heap, Google Analytics, and Salesforce.\n\nYou love learning. Engineering is an ever-evolving world. 
You enjoy playing with new tech and exploring areas that you might not have experience with yet. You are a self-driven learner, willing to share knowledge and participate actively in your community.\n\nHaving overlap with your team is critical when working in a global remote team. Modus requires all team members to overlap 100% with EST hours daily. In addition, reliable high-speed internet is a must, as is a personal development environment running macOS, Windows 10 Pro, or Linux.\n\nAdditionally, as part of this role you will be expected to adhere to our customers' information security requirements. This includes, but is not limited to, installing BitDefender endpoint security, provided by Modus Create, on the machine you will use to connect to the client’s network and systems. BitDefender provides the following features: AntiVirus/AntiMalware, intrusion detection, blocking of phishing/compromised websites, full disk encryption (using tools native to the operating system), disabling of external storage, automatic patch deployment, and Windows risk management.\n\nThings You Might Do\n\nModus is a fast-growing, remote-first company, so you'll likely get experience on many different projects across the organization. 
That said, here are some things you'll probably do:\n\n\n* Give back to the community via open source and blog posts\n\n* Teach and be taught: Modus creates active teams that work on internal and external projects together, giving opportunities to stay relevant with the latest technologies and learn from experts worldwide\n\n* Interact directly with internal and external clients to represent Modus and its value\n\n\n\n\nWhy Modus Create:\n\nOur benefits may vary according to the country you are located in, so please reach out to our recruiter if you have any questions.\n\nIf you become a contractor, we offer:\n\n\n* Competitive compensation\n\n* 100% Remote work (could vary according to the client's needs)\n\n* Travel according to client's needs\n\n* The chance to work side-by-side with thought leaders in emerging tech\n\n\n\n\nDo you have what it takes? Apply today! \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to DevOps, Engineer, JavaScript, Amazon, Serverless, Cloud, Python, SaaS and Linux jobs that are similar:\n\n
$70,000 — $120,000/year\n
\nLoopVOC is looking for a talented and motivated platform engineer who can hit the ground running and support taking our NLP product to the next level in a 100% remote work environment. Our software is designed to revolutionize the way SaaS companies collect, analyze, and respond to feedback from their customers, by combining text analytics with a simple user experience. We're seeking someone to help us expand and scale our platform. This position will have software and infrastructure responsibilities, from building new features and integrations to making fundamental architecture choices to facilitating continuous integration and deployment across the company. If this excites you, we’d love to hear from you!\n\nIdeal candidate\n\n\n* You embrace and live a growth mindset\n\n* You work well independently and efficiently, and with others\n\n* You’ve used a combination of Go, Java, C, or Python to create fast, maintainable production software.\n\n* You know what it takes to scale a SaaS application deployed in a cloud environment\n\n* You're a creative problem solver\n\n* You’re a pro at designing and consuming RESTful APIs.\n\n* You are familiar with, if not enthusiastic about, working under a test-driven development process\n\n\n\n\nExtra Points:\n\n\n* You’ve worked in the B2B SaaS or analytics space.\n\n* You’ve built tools with machine learning and natural language processing.\n\n* You know the following about Go right now, without searching the internet: What is a channel and a goroutine, and how do they relate? What is an interface, and why should you use one? How does a slice allocate memory as you append items to it? When a Go binary is run, what is the order in which functions are called across packages?\n\n* You understand containerization of all your deployments in Docker and best practices for auto-scaling and load balancing your production environment in Kubernetes.\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Senior, Engineer, Cloud, Python and SaaS jobs that are similar:\n\n
$70,000 — $120,000/year\n
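Purely as an illustration (not part of the posting), here is a minimal Go sketch of the concepts those interview questions probe. The comments summarize the answers; note that capacity doubling on `append` is a runtime detail at small sizes, not a language guarantee:

```go
package main

import "fmt"

// Stringer illustrates an interface: a method set that any type
// satisfies implicitly, which is what lets interfaces decouple code.
type Stringer interface {
	String() string
}

func main() {
	// Start-up order: package-level variables are initialized first,
	// then each package's init() functions run (dependencies before
	// dependents), and finally main.main() is called.

	// A goroutine is a lightweight, runtime-scheduled thread; a channel
	// is a typed conduit goroutines use to communicate and synchronize.
	ch := make(chan int)
	go func() { ch <- 42 }() // the goroutine sends, main receives
	fmt.Println(<-ch)

	// A slice wraps a backing array; append reallocates a larger array
	// (roughly doubling capacity at small sizes) when the length would
	// exceed the current capacity.
	s := make([]int, 0, 1)
	for i := 0; i < 5; i++ {
		s = append(s, i)
		fmt.Println(len(s), cap(s))
	}
}
```

Printing `len` and `cap` after each `append` makes the reallocation points visible: capacity stays put until it is exhausted, then jumps.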
\nCrunch.io, part of YouGov PLC, is a market-defining company in the analytics SaaS marketplace. We’re a company on the rise. We’ve built a revolutionary platform that transforms our customers’ ability to drive insight from market research and survey data. We offer a complete survey data analysis platform that allows market researchers, analysts, and marketers to collaborate in a secure, cloud-based environment, using a simple, intuitive drag-and-drop interface to prepare, analyze, visualize, and deliver survey data and analysis. Quite simply, Crunch provides the quickest and easiest way for anyone, from CMO to PhD, with zero training, to analyze survey data. Users create tables, charts, graphs, and maps. They filter and slice-and-dice survey data directly in their browser.\n\nOur start-up culture is casual, respectful of each other’s varied backgrounds and lives, and high-energy because of our shared dedication to our product and our mission. We are loyal to each other and our company. We value work/life balance, efficiency, simplicity, and fantastic customer service! Crunch has no offices and fully embraces a 100% remote culture. We have 40 employees spread across 5 continents. Remote work at Crunch is flexible and largely independent, yet highly cooperative.\n\nWe are hiring a DevOps Lead to help expand our platform and operations excellence. We are inviting you to join our small, fully remote team of developers and operators helping make our platform faster, more secure, and more reliable. You will need to be self-motivated and disciplined in order to work with our fully distributed team.\n\nWe are looking for someone who is a quick study, who is eager to learn and grow with us, and who has experience in DevOps and Agile cultures. At Crunch, we believe in learning together: we recognize that we don’t have all the answers, and we try to ask each other the right questions. 
As Crunch employees are completely distributed, it’s crucial that you can work well independently and keep yourself motivated and focused.\n\nOur Stack:\n\nWe currently run our in-house production Python code against Redis, MongoDB, and Elasticsearch services. We proxy API requests through NGINX, load balance with ELBs, and deploy our React web application to the AWS CloudFront CDN. Our current CI/CD process is built around GitHub, Jenkins, and Blue Ocean, including unit, integration, and end-to-end tests and automated system deployments. We deploy to Auto Scaling groups using Ansible and cloud-init.\n\nIn the future, all or part of our platform may be deployed via DroneCI, Kubernetes, NGINX ingress, Helm, and Spinnaker.\n\nWhat you'll do:\n\nAs a Leader:\n\n\n* Manage and lead a team of Cloud Operations Engineers who are tasked with ensuring our uptime guarantees to our customer base.\n\n* Scale the worldwide Cloud Operations Engineering team with the strategic implementation of new processes and tools.\n\n* Hire and ramp exceptional Cloud Operations Engineers.\n\n* Assist in scoping, designing, and deploying systems that reduce Mean Time to Resolve for customer incidents.\n\n* Inform executive leadership and escalation management personnel of major outages.\n\n* Compile and report KPIs across the full company.\n\n* Work with Sales Engineers to complete pre-sales questionnaires and to gather customer use metrics.\n\n* Prioritize projects competing for human and computational resources to achieve organizational goals.\n\n\n\n\nAs an Engineer:\n\n\n* Monitor and detect emerging customer-facing incidents on the Crunch platform; assist in their proactive resolution, and work to prevent them from occurring.\n\n* Coordinate and participate in a weekly on-call rotation, where you will handle short-term customer incidents (from direct surveillance or through alerts via our Technical Services Engineers).\n\n* Diagnose live incidents, differentiate between platform issues versus 
usage issues across the entire stack (hardware, software, application, and network) within physical datacenter and cloud-based environments, and take the first steps toward resolution.\n\n* Automate routine monitoring and troubleshooting tasks.\n\n* Cooperate with our product management and engineering organizations by identifying areas for improvement in the management of applications powering the Crunch infrastructure.\n\n* Provide consistent, high-quality feedback and recommendations to our product managers and development teams regarding product defects or recurring performance issues.\n\n* Be the owner of our platform. This includes everything from our cloud provider implementation to how we build, deploy, and instrument our systems.\n\n* Drive improvements and advancements to the platform in areas such as container orchestration, service mesh, and request/retry strategies.\n\n* Build frameworks and tools to empower safe, developer-led changes, automate the manual steps, and provide insight into our complex system.\n\n* Work directly with software engineering and infrastructure leadership to enhance the performance, scalability, and observability of multiple applications, ensure that production hand-off requirements are met, and escalate issues.\n\n* Embed into SRE projects to stay close to the operational workflows and issues.\n\n* Evangelize the adoption of best practices in relation to performance and reliability across the organization.\n\n* Provide a solid operational foundation for building and maintaining successful SRE teams and processes.\n\n* Maintain project and operational workload statistics.\n\n* Promote a healthy and functional work environment.\n\n* Work with Security experts to do periodic penetration testing, and drive resolution for any issues discovered.\n\n* Liaise with IT and Security Team Leads to successfully complete cross-team projects, filling in for these Leads when necessary.\n\n* Administer a large portfolio of SaaS tools 
used throughout the company.\n\n\n\n\nQualifications:\n\n\n* Team lead experience with an on-call DevOps, SRE, or Cloud Operations team (at least 2 years).\n\n* Experience recruiting, mentoring, and promoting high-performing team members.\n\n* Experience being an on-call DevOps, SRE, or Cloud Operations engineer (at least 2 years).\n\n* Proven track record of designing, building, sizing, optimizing, and maintaining cloud infrastructure.\n\n* Proven experience developing software, CI/CD pipelines, automation, and managing production infrastructure in AWS.\n\n* Proven track record of designing, implementing, and maintaining full CI/CD pipelines in a cloud environment (Jenkins experience preferred).\n\n* Experience with containers and container orchestration tools (Docker, Kubernetes, Helm, Traefik, NGINX ingress, and Spinnaker experience preferred).\n\n* Expertise with Linux system administration (5+ years) and networking technologies, including IPv6.\n\n* Knowledgeable about a wide range of web and internet technologies.\n\n* Knowledge of NoSQL database operations and concepts.\n\n* Experience in monitoring, system performance data collection and analysis, and reporting.\n\n* Capability to write small programs/scripts to solve both short-term systems problems and to automate repetitive workflows (Python and Bash preferred).\n\n* Exceptional English communication and troubleshooting skills.\n\n* A keen interest in learning new things.\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to DevOps, Executive, React, English, Elasticsearch, Cloud, NoSQL, Python, API, Sales, SaaS, Engineer, Nginx and Linux jobs that are similar:\n\n
$70,000 — $120,000/year\n
\nOverview\n\nOur rapidly scaling medical education technology company is seeking a passionate Data Analyst to join our team. In this role, you will be working with the product, marketing, content, and operations teams to help us build the most joyful, effective learning platform in the medical education space.\n\nThis is a hands-on position in a unique remote, start-up environment, so we are looking for a candidate who is not afraid to roll up their sleeves to do the work and to collaborate with team members across the organization.\n\nAbout Osmosis\n\nOur mission is to “Empower the world’s clinicians & caregivers with the best learning experience possible.” To this end, we have an audience of more than a million current & future clinicians as well as patients and family members. Members of the Osmosis learning platform and video library use the product to learn efficiently & excel in classes, board exams, and in the clinic. \n\nWe are a team of creative, approachable, and driven entrepreneurs, researchers, and clinicians who are passionate about improving healthcare and education. At Osmosis, we collaborate remotely and value highly motivated problem solvers who manage their time efficiently, communicate earnestly, work effectively, and understand the importance of life-work balance. We do everything we can to make sure our teammates are successful personally and professionally.\n\nAbout the Role\n\nAs a Data Analyst, you will turn data into information, information into insight, and insight into business decisions. You’ll develop analysis and reporting capabilities as well as monitor performance and quality control plans to identify improvements. Your primary responsibility will be to work with stakeholders across business functions including product, growth, marketing, and operations to build analytics products that enable data-driven decision making. 
You will guide analytics projects from discovery to solution and help us raise the bar for how we should apply our data to business decisions. In this role, you will be expected to: \n\n\n* Complete analysis projects with business stakeholders to monitor the health of the business and help the business make data-driven strategic, product, and operational decisions\n\n* Develop and own business intelligence dashboards, visualizations, and reports to provide ongoing tracking and insights to the team\n\n* Build and improve advanced analytical models for product and business use cases\n\n* Collaborate with our data engineer to improve data architecture and maintain a robust and accurate data warehouse\n\n* Acquire data from primary or secondary data sources and maintain databases/data systems\n\n* Identify, analyze, and interpret trends or patterns in complex data sets\n\n* Discover ways to use analytics to support team members across the business to yield action through data-driven decision making\n\n\n\n\nQualifications\n\n\n* 2-4 years of experience managing data analysis projects. eCommerce or SaaS experience is a plus.\n\n* Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.\n\n* Excel at composing concise, efficient SQL queries, writing reports, and presenting findings with data visualization tools.\n\n* Experience using a cloud data warehouse environment (such as BigQuery).\n\n* Experience with Python and pandas is a plus.\n\n* Working knowledge of business statistics and probability.\n\n* Ability to build trust and communicate insights effectively with a variety of business stakeholders across analytical levels.\n\n* Desire to be a partner to business stakeholders with a shared goal of using analytics and insights to drive the business forward.\n\n* Communicator. Excellent communication skills and a willingness to give and receive feedback.\n\n* Driven. 
Proactive and self-driven problem-solving with sharp attention to detail.\n\n* Iterative. You deliver results quickly with iteration, instead of waiting for perfection. \n\n* Adaptable. You are flexible and versatile with projects, goals, and strategies. You move quickly with change and stay open-minded.\n\n* Entrepreneurial. You are a proven executor and work with urgency to produce excellent results with limited time and resources.\n\n* Lifelong learner. You are actively consuming content (podcasts, blogs, books, etc.) and applying these learnings in your work to make sure you are as effective as possible. \n\n* Passion for Osmosis’s mission to provide your future clinicians the best education so they can provide you and your loved ones the best care.\n\n\n\n\nOsmosis is an equal opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or other status protected by law.\n\nTo apply, please submit the following to [email protected]: \n\n\n* Resume\n\n* Portfolio of any relevant work\n\n* Answers to the following questions (50 words or less for each question):\n\n\n\n\n* What was an interesting data problem you worked on within the last year? How did you identify and address it?\n\n* Describe a situation where you did not have access to all of the data needed to triage a problem or analyze a situation, and how you adapted to it.\n\n* Based solely on what you see on osmosis.org, how would you measure customer lifetime value for Osmosis?\n\n\n\nIncomplete applications will not be considered. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Data Science, Analyst, Video, Education, Cloud, Python, Excel, SaaS, Medical, Engineer and Ecommerce jobs that are similar:\n\n
$75,000 — $120,000/year\n
\nLoopVOC is looking for a talented and motivated backend engineer who can hit the ground running and take our product to the next level. Our software is designed to revolutionize the way SaaS companies collect, analyze, and respond to feedback from their customers, by combining text analytics with a simple user experience. We're seeking someone to help us expand and scale our platform as an early partner in the engineering team. This position will have software and infrastructure responsibilities, from building new features and integrations to making fundamental architecture choices to optimizing machine learning and natural language processing.\n\nWe are an analytics startup. Data is at the heart of every decision we make, and you’ll be enabling our customers to use data in new and innovative ways. We are innately curious, radically transparent, and obsessed with feedback. We set aggressive goals and push ourselves to constantly evolve. We like to go after big ideas, fast… and are looking for someone that likes to do the same.\n\nOur developers can live and work anywhere. We offer competitive salaries, unlimited vacation, and flexible hours. You’ll have the chance to earn equity in a fast-growing startup, work with cutting-edge technology, and build solutions for the top SaaS companies in the world. 
If you want to look back on your career and know that you were a vital part of building an awesome company, this role is for you.\n\nMust Haves:\n\n\n* You’ve used Go, Java, C, or Python to create fast, maintainable production software.\n\n* You know what it takes to scale a SaaS application deployed in a cloud environment.\n\n* You embrace obstacles & are energized by new challenges with unproven solutions.\n\n* You’re a pro at designing and consuming RESTful APIs.\n\n* You take ownership over project timelines & deliverables.\n\n\n\n\nExtra Points:\n\n\n* You’ve worked in the B2B SaaS or analytics space.\n\n* You’ve built tools with machine learning and natural language processing.\n\n* You understand containerization of all your deployments in Docker and best practices for auto-scaling and load balancing your production environment in Kubernetes.\n\n* You write automated tests as a part of your development process and are part of a team that pushes code to production every day with tools for continuous integration and continuous deployment that you configure and administer.\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Engineer, Backend, Cloud, Python, Stats and SaaS jobs that are similar:\n\n
$75,000 — $120,000/year\n
\nLoopVOC is looking for a talented and motivated backend engineer who can hit the ground running and take our product to the next level. Our software is designed to revolutionize the way SaaS companies collect, analyze, and respond to feedback from their customers, by combining text analytics with a simple user experience. We're seeking someone to help us scale our platform, from building new features and integrations to optimizing architecture and design to facilitating continuous integration and deployment.\n\nWe are an analytics startup. Data is at the heart of every decision we make, and you’ll be enabling our customers to use data in new and innovative ways. We are innately curious, radically transparent, and obsessed with feedback. We set aggressive goals and push ourselves to constantly evolve. We like to go after big ideas, fast… and challenge each other to do the impossible.\n\nOur developers can live and work anywhere. We offer competitive salaries, unlimited vacation, and flexible hours. You’ll have the chance to earn equity in a fast-growing startup, work with cutting-edge technology, and build solutions for the top SaaS companies in the world. 
If you want to look back on your career and know that you were a vital part of building an awesome company, this role is for you.\n\nMust Haves:\n\n\n* You’ve used Go, Java, C, or Python to create fast, maintainable production software.\n\n* You know what it takes to scale a SaaS application deployed in a cloud environment.\n\n* You embrace obstacles & are energized by new challenges with unproven solutions.\n\n* You’re a pro at designing and building RESTful APIs for others to consume.\n\n* You take ownership over project timelines & deliverables.\n\n\n\n\nExtra Points:\n\n\n* You’ve worked in the B2B SaaS or analytics space.\n\n* You have experience consuming APIs from major SaaS providers.\n\n* You’ve built tools with machine learning and natural language processing.\n\n* You’ve rewritten SQL queries to speed them up and you know how to optimize a database server’s overall performance.\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Senior, Golang, Engineer, Backend, Cloud, Python and SaaS jobs that are similar:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
๐ฐ 401(k)\n\n๐ Distributed team\n\nโฐ Async\n\n๐ค Vision insurance\n\n๐ฆท Dental insurance\n\n๐ Medical insurance\n\n๐ Unlimited vacation\n\n๐ Paid time off\n\n๐ 4 day workweek\n\n๐ฐ 401k matching\n\n๐ Company retreats\n\n๐ฌ Coworking budget\n\n๐ Learning budget\n\n๐ช Free gym membership\n\n๐ง Mental wellness budget\n\n๐ฅ Home office budget\n\n๐ฅง Pay in crypto\n\n๐ฅธ Pseudonymous\n\n๐ฐ Profit sharing\n\n๐ฐ Equity compensation\n\nโฌ๏ธ No whiteboard interview\n\n๐ No monitoring system\n\n๐ซ No politics at work\n\n๐ We hire old (and young)\n\n
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
At Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.

Elastic’s Cloud product allows users to build new clusters or expand existing ones easily. This product is built on a Docker-based orchestration system that makes it easy to deploy and manage multiple Elastic clusters.

What You Will Do:

* Be the owner of data quality for our KPI and analytics data. Ensure that we're getting the data we need and that it's as error-free as possible.

* Troubleshoot problems with data and reports.

* Work cross-functionally with product managers, analysts, and engineering teams to extract meaningful data from multiple sources.

* Develop test strategies, create test plans, and execute test cases manually, then create automation to reduce regressions.

* Be the point person for making sure that the raw data we use for our KPIs flows into our reporting systems.

* Test Elasticsearch models, queries, and Kibana visualizations. Use outlier detection and statistical methods to define and monitor valid data ranges.

* Understand the Cloud business model and work with analysts to build presentations for executives and product owners.

* Generate reports and/or data dumps based on requirements provided by stakeholders. Contribute to automation of these reports.

* Grow and share your interest in technical outreach (blog posts, tech papers, conference speaking, etc.).

What You Bring Along:

* You are passionate about software that delivers quality data to stakeholders.

* Experience testing models for SaaS KPIs such as user churn, trial conversion, etc.

* Experience with scripting languages such as Python, Ruby, and Bash.

* Experience with Jupyter and Python libraries like Pandas and NumPy is a plus.

* Ability to write queries for Elasticsearch and relational data stores such as Postgres.

* Experience creating test plans for complex data analysis.

* Basic understanding of statistics and data modeling.

* A self-starter who has experience working across multiple technical teams and decision makers.

* You love working with a diverse, worldwide team in a distributed work environment.

Additional Information:

* Competitive pay and benefits

* Equity

* Catered lunches, snacks, and beverages in most offices

* An environment in which you can balance great work with a great life

* Passionate people building great products

* Employees with a wide variety of interests

* Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.

Elastic is an Equal Employment Opportunity employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state, or local law, ordinance, or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs (Cloud, Engineer, Elasticsearch, Python, SaaS, Apache):

$75,000 — $120,000/year
Summary:

We are looking for a Senior DevOps (Site Reliability) Engineer to join Selerity’s team, scaling up an A.I.-driven analytics and recommendation platform and integrating it into enterprise workflows. Highly competitive compensation plus significant opportunities for professional growth and career advancement.

Employment Type: Contract or Full-time

Location is flexible: We have offices in New York City and Oak Park, Illinois (a Chicago suburb), but about half of our team currently works remotely from various parts of Europe, North America, and Asia.

Job Description:

Want to change how the world engages with chat, research, social media, news, and data?

Selerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We're tackling big challenges in predictive analytics, conversational interfaces, and workflow automation, and we need your help!

We’re looking for an experienced DevOps (Site Reliability) Engineer to join a major initiative at a critical point in our company’s growth. The majority of Selerity’s applications are developed in Java and C++ on Linux, but knowledge of other languages (especially Python and JavaScript), platforms, and levels of the stack is very helpful.

Must-haves:

* Possess a rock-solid background in Computer Science (minimum BS in Comp Sci or related field) plus at least 5 years (ideally 10+) of challenging work experience.

* Implementation of DevOps / SRE processes at scale, including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite).

* Demonstrated proficiency building and modifying Java and C++ applications in Linux environments (using Git, SVN).
* Significant operations expertise with Ansible (preferred), Chef, or Puppet deployment automation in a cloud environment.

* Direct experience in the design, implementation, and maintenance of SaaS APIs that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).

* Solid track record of making effective design decisions, balancing near-term and long-term objectives.

* Know when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.

* Work effectively in agile teams with remote members; get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.

* Clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc.

Nice-to-haves:

* Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as OSI layer 1 and 2 protocols.

* Experience with Internet routing protocols such as BGP.

* Implementation of software-defined networking or other non-traditional networking paradigms.

* Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.

* Full-stack development and operations experience with web apps on Node.js.

* Experience with analytics visualization libraries.

* Experience with large-scale analytics and machine learning technologies including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc.

* Conversant with relational, column, object, and graph database fundamentals, with strong practical experience in any of those paradigms.

* Deep understanding of how to build software agents and conversational workflows.

* Experience with additional modern programming languages (Python, Scala, …).

Our stack:

* Java, C++, Python, JavaScript/ECMAScript + Node, Angular, RequireJS, Electron, Scala, etc.

* A variety of open-source and in-house frameworks for natural language processing and machine learning, including artificial neural networks / deep learning.

* Hybrid of AWS (EC2, S3, RDS, R53) plus dedicated datacenter network, server, and GPU/coprocessor infrastructure.

* Cassandra and Aurora, plus an in-house streaming analytics pipeline (similar to Apache Flink) and indexing/query engine (similar to Elasticsearch).

* In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (similarities to protobufs/FixFast/zeromq/itch).

* Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, IntelliJ.

This position offers a great opportunity to work with advanced technologies, collaborate with a top-notch global team, and disrupt a highly visible, multi-billion-dollar market.

Compensation:

We understand how to attract and retain the best talent and offer a competitive mix of salary, benefits, and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.

We have team members on four continents, and we're adept at making remote workers feel like part of the team. If you join our NYC main office, be sure to bring your Nerf toys, your drones, and your maker gear; we’re into that stuff, too.

Interview Process:

If you can see yourself at Selerity, send your resume and/or online profile (e.g. LinkedIn) to [email protected]. We’ll arrange a short introductory phone call, and if it sounds like there’s a match we'll arrange for you to meet the team for a full interview.

The interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. It is intended to be challenging, but the developers you meet and the topics you’ll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity.

We value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence. You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity, or national origin.

Recruiters: Please note that we are not currently accepting referrals from recruiters for this position.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs (DevOps, Senior, Crypto, Finance, Java, Cloud, Python, SaaS, Engineer, Apache, Linux):

$70,000 — $120,000/year