We're a highly skilled team of software engineers who are building an awesome product and moving fast. We value people who take initiative, and we empower everyone at Klue to make a real change in the product or our processes.

We are looking for a Senior Backend Engineer to work with our Consumer team to deliver high-quality products in the most efficient way.

FAQ

Q: Klue who?
A: We're Klue. From a technical perspective, Klue's mission is to descale huge amounts of data to the human level, so people can process it and make use of it. Klue is that trusted intermediary: today it's proven for sales enablement, but tomorrow it's all-teams enablement.

Q: What level of experience are we looking for?
A: Right now we are looking for a Senior-level Backend Engineer.

Q: What is our development team working on?
A: As part of our backend team, we are concerned with data storage and retrieval and the infrastructure that enables both. Here's what our development team is working on, and the opportunity for motivated software engineers to dig into alongside us:
- Big data: lots of data.
- Ingesting thousands of news articles, web pages, and marketing and sales data points per day. The challenge is indexing them over long periods of time and keeping them searchable and ready for different kinds of analysis.
- Expanding our Rails REST API and offering public APIs to enable integrations.
- Architecting infrastructure for a scalable, resilient, and robust service. We are migrating from a monolithic architecture to Kubernetes-hosted microservices.

Q: What tech stack is this team working with?
A: Ruby (Rails), Python (Flask), PostgreSQL, Elasticsearch, Redis, GCP, AWS, TensorFlow, Keras, Docker, Kubernetes.
We code review all changes, continuously integrate, pay down technical debt, and aim for high automated test coverage.
We love microservices and, while we mostly use Python, Ruby, Google Cloud Platform, Linux, JavaScript, and React, new services can be built using whatever tools make sense to get the job done and support our game-changing innovation.

Q: Are you hybrid-friendly?
A: Yes! Hybrid: the best of both worlds (remote and in-office).
Our main Canadian hubs are in Vancouver and Toronto, and most of our teams are located in EST and PST.
You and your team will be in office at least 2 days per week.

Q: What skills do you bring?
* Expertise in at least one general-purpose programming language, with a strong preference for Ruby on Rails
* Expertise in relational databases such as PostgreSQL or MySQL
* Experience designing REST APIs
* Experience using NoSQL databases such as Elasticsearch or MongoDB is a plus
* Experience using Docker, Kubernetes, AWS, or GCP is a plus
* Bonus if you have data engineering interest and experience: ETL pipelines, Snowplow, Snowflake, BigQuery, Redshift, Airflow, or equivalent

Q: What motivates our current team right now?
* The type of work. Challenging, stimulating, and meaningful work on a modern, relevant tech stack. We know engineers especially want to work on hard, innovative technical problems.
* The inspiration from skilled and proven leaders.
* Entrepreneurial fingerprints on what will be a future billion-dollar company anchored in Canada.
* Culture, team, and the work environment.
* A high degree of autonomy and accountability.
* A high degree of transparency and high-quality communication.

Q: What are the people at Klue like?
* Builders
* Intellectually curious
* Ambitious
* Objective-oriented
* Check us out!

Q: What about total compensation & benefits?
* Benefits. We currently have extended health benefits starting on your 1st day.
* Time off. Take what you need. We want the team to prioritize wellness and avoid burnout.
Vacation usually falls into 3 categories: recharging, life events, and keeping a work-life balance. Just ensure the required work gets done and clear it with your team in advance. You need to take at least two weeks off every year. The average Klue team member takes 2-4 weeks of PTO per year.

$150,000 - $180,000 a year

We gather compensation benchmarking data across the BC and Canadian tech industry and use that data to build a range for our current team and future talent. Your exact salary is determined by your experience level, skills, capabilities, whether or not you select options, and internal pay parity.
If you feel like this role is a great fit and have questions about comp, get in touch and we're happy to discuss further. There is always an ongoing conversation around compensation.

Lastly, we take potential into consideration. An equivalent combination of education and experience may be accepted in lieu of the specifics listed above. If you know you have what it takes, even if that's different from what we've described, be sure to explain why in your application. Reach out and let's see if there is a home here for you now or in the future.

We've made a commitment to support and contribute to a diverse environment, on our teams and in our community. We're early in our journey: we've started employee-led resource groups, committed to Pay Up For Progress, and use success profiles for roles instead of "years of experience". We continue to scale our efforts as Klue grows. We're proud to be an equal opportunity employer and have dedicated that commitment to our current and future #kluecrew. During the interview process, please let us know if there is anything we need to make more accessible or accommodate to support you to be successful.

All interviews will be conducted via video calls. We work in a hybrid model of WFH (remote) and in-office.
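The FAQ above mentions ingesting thousands of articles per day and keeping them searchable in Elasticsearch over long periods. One common approach is writing batches through the `_bulk` API into date-based indices, so older indices can be aged out while remaining queryable. A minimal sketch of building such a payload in Python (the index naming scheme and document fields are illustrative assumptions, not Klue's actual schema):

```python
import json
from datetime import date

def bulk_index_body(articles, index_prefix="articles"):
    """Build an Elasticsearch _bulk request body (NDJSON) for a batch of
    articles, routing each document to a date-based index."""
    index_name = f"{index_prefix}-{date.today():%Y.%m}"
    lines = []
    for doc_id, article in articles:
        # Each document is an action line followed by the source line.
        lines.append(json.dumps({"index": {"_index": index_name, "_id": doc_id}}))
        lines.append(json.dumps(article))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

body = bulk_index_body([
    ("a1", {"title": "Competitor launches new pricing", "source": "newswire"}),
    ("a2", {"title": "Market share report Q3", "source": "analyst"}),
])
print(body)
```

The resulting string can be POSTed to an Elasticsearch `_bulk` endpoint; in production you would use an official client, but the payload shape is the same.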
We're excited to meet you, and in the meantime, get to know us:

* Pay Up For Progress & 50 - 30 Challenge & Klue Blog
* Win-Loss Acquisition (2023)
* Series A (2020)
* Series B (2021)
* Culture, culture, culture!
* Winning as Women & Competitive Enablement Show
* Glassdoor
* About Us
* Twitter
* Instagram
* LinkedIn
* Wellfound (AngelList)

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Python, Video, Cloud, NoSQL, Ruby, API, Senior, Marketing, Sales, Engineer, and Backend roles:
$60,000 — $110,000/year

#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
This job post is closed and the position is probably filled. Please do not apply.
**About IPinfo**

IPinfo is a leading provider of IP address data. Our API handles over 40 billion requests a month, and we also license our data for use in many products and services you might have used. We started as a side project back in 2013, offering a free geolocation API, and we've since bootstrapped ourselves into a profitable business with a global team of 14, and grown our data offerings to include geolocation, IP to company, carrier detection, and VPN detection. Our customers include T-Mobile, Nike, DataDog, DemandBase, Clearbit, and many more.

**How We Work**

We have a small, ambitious team spread all over the globe. We sync up on a monthly all-hands Zoom call, and most teams do a call together every 2 weeks. Everything else happens asynchronously, via Slack, GitHub, Linear, and Notion. That means you can pick the hours that work best for you, to allow you to be at your most productive.

To thrive in this environment you'll need high levels of autonomy and ownership. You have to be resourceful and able to work effectively in a remote setup.

**The Role**

We're looking to add an experienced engineer to our 4-person data team. You'll work on improving our data, maintaining our data pipelines, defining and creating new data sets, and helping us cement our position as an industry leader. Some things we've recently been working on in the data team:

* Building out our global network of probe servers and doing internet-wide data collection (ping, traceroute, etc.).
* Finding, analyzing, and incorporating existing data sets into our pipeline to improve our quality and accuracy.
* Building ML models to classify IP address usage as consumer ISP, hosting provider, or business.
* Inventing and implementing scalable algorithms for IP geolocation and other big data processing.

Here are some of the tools we use. It's great if you have experience with these, but if not we'd expect you to ramp up quickly without any problems:

* BigQuery
* Google Composer / Apache Airflow
* Python / Bash / SQL
* Elasticsearch

Any IP address domain knowledge would be useful too, but we can help get you up to speed here:
* ASN / BGP / CIDR / Ping / Traceroute / Whois, etc.

**What We Offer**

* 100% remote team and work environment
* Flexible working hours
* Minimal meetings
* Competitive salary
* Flexible vacation policy
* Interesting and challenging work

Please mention the words **ORIGINAL DESERT INDICATE** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xNDY=). This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read this and see they're human.

#Salary and compensation
$90,000 — $140,000/year

#Benefits

* Async

#Location
Worldwide
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
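The IPinfo post above mentions CIDR, ASN, and geolocation work. Much of that domain reduces to longest-prefix matching of an address against CIDR blocks, which Python's stdlib `ipaddress` module makes easy to sketch (the prefixes and labels below are invented for illustration, not real IPinfo data):

```python
import ipaddress

# Toy "IP to network" lookup over CIDR prefixes, the kind of primitive
# that IP geolocation and IP-to-company datasets are built on.
PREFIXES = [
    (ipaddress.ip_network("203.0.113.0/24"), "ExampleNet hosting"),
    (ipaddress.ip_network("203.0.0.0/16"), "ExampleNet backbone"),
]

def lookup(ip):
    """Return the label of the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(ip)
    matches = [(net, label) for net, label in PREFIXES if addr in net]
    if not matches:
        return None
    # Longest prefix wins, as in routing tables.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("203.0.113.7"))  # matches both prefixes; the /24 is more specific
print(lookup("203.0.42.1"))   # only the /16 matches
```

A production system would use a radix trie rather than a linear scan, but the matching rule is the same.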
This job post is closed and the position is probably filled. Please do not apply.
We're looking for a talented Full Stack Engineer who leads by example and gets stuck into everything that touches our product. Come help us shape a product of lasting value for our first and future customers.

### [Cogsy](http://cogsy.com/about/)

We're building products that will help fast-growing ecommerce brands optimize their purchase decisions and grow even better. We believe in the idea of economies of better (not just economies of scale) and have strong values.

We're building our initial team, and beyond our values, we want diverse, unique individuals to show up as their magical, best selves in the work they do within our team.

### The role

You will be responsible for creating the early versions of Cogsy's application. This includes, but is not limited to:

- Product development
- API design & development
- Database and systems administration
- Metrics / growth / A/B testing
- 3rd-party integrations

We expect you to be a generalist with the ability and confidence to work on any part of our stack. These are (some of) the tools that we work with every day:

- Node.js
- React
- Databases (MongoDB / PostgreSQL)
- Aggregation engine (Elasticsearch)
- Caching (Redis)
- Async messaging (RabbitMQ)
- Bonus: Python experience

This is a remote position and you can work from wherever. It is, however, important that we maintain connectedness as a team and have sufficient time for synchronous work too. We'd prefer team members in CET or EST (or within 1 hour of those), or who work on those schedules, as that means there are 3-4 hours of overlap for the whole team every day.

### Requirements

If you were to join Cogsy today, you'd be one of the first team members and could have great influence on the next steps we take.

You're likely a good fit for this position if you:

- **[Read these values](https://cogsy.com/about/#headline-428-14)** and they resonate with you.
- Are a true product builder and can make progress both independently and within a team.
- Can put an infrastructure in place that handles / parses a lot of data.
- Can move fast and help us ship a first version (that is revenue-ready) in a cost- and time-efficient manner.
- Have always wanted to build your own team.
- Take action and pay attention to detail.
- Have superior communication skills.
- Have professional experience building B2B web applications.

### Salary

**70,000-120,000 USD**, depending on level of seniority / experience. We'll assess seniority during the interview and a brief test project.

### Benefits

- True **flexible** work: work wherever and however you need to work to be at your best, **and** ensure you stay connected to the team.
- Once global travel is open again, we'll do **week-long team retreats** in fun locations. All expenses paid, of course.
- A **minimum holiday policy**, which basically means you take time off whenever you need it to recharge or attend to other matters. The team will hold you accountable to taking a minimum amount of time off in any rolling 12-month window.
- Parental leave for those individuals who plan to discover the joys of having (more) kids.
- **Health insurance** (powered by [Safety Wing](https://safetywing.com/remote-health)) tailored for remote team members, whether you're at home, traveling, or being a nomad.
- A monthly **learning** and **wellness allowance**. Buy books, pay for your yoga class, or get a Calm subscription for greater mindfulness. Whatever helps you develop as an individual and become *the best you* is what we'll pay for.
- We are a **life- and family-first** company that seeks meaningful experiences outside of work, and we endeavor to help our customers do the same.

Please mention the words **YARD VERY DAY** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xNDY=). This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read this and see they're human.

#Salary and compensation
$70,000 — $120,000/year

#Benefits

* Async

#Location
Worldwide
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful, and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)

# What is Splitgraph?

## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning, and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration, and governance. Users can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://www.twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore), where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction.
Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy), and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db`, which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db`, where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACLs, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway, or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [Elasticsearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and Elasticsearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies**, including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth, and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xNDY=). This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read this and see they're human.

#Location
Worldwide
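The Splitgraph "data delivery network" described above proxies each incoming query to a temporary Postgres database holding the referenced data. The routing idea can be sketched in plain Python, independent of PgBouncer (the table-reference syntax and backend map below are simplified assumptions for illustration, not Splitgraph's actual implementation):

```python
import re

# Map of namespace/repository -> backend DSN where that data is loaded.
# Purely illustrative; a real DDN would spin backends up on demand and cache them.
BACKENDS = {
    "mildbyte/complex_dataset": "postgres://cache-host-1/db1",
}

# Matches a quoted "namespace/repository" reference inside a query.
REF = re.compile(r'"([\w-]+)/([\w-]+)"')

def route(sql):
    """Return the backend DSN holding the dataset a query references,
    or None if the query references no known dataset."""
    m = REF.search(sql)
    if not m:
        return None
    key = f"{m.group(1)}/{m.group(2)}"
    return BACKENDS.get(key)

print(route('SELECT * FROM "mildbyte/complex_dataset".table LIMIT 10'))
```

A real proxy would also parse SQL properly and rewrite queries, but the dispatch step reduces to this kind of reference-to-backend lookup.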
We are looking for a Lead DevOps Engineer to join our team at Prominent Edge. We are a small, stable, growing company that believes in doing things right. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suit the particular problem and the goals of our customers. As a result, we want engineers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Many of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated by our exceptional benefits package. Check out our website at https://prominentedge.com/ for more information and apply through https://prominentedge.com/careers.

Required skills:
* Experience as a lead engineer.
* A minimum of 8 years of total experience, including at least 1 year of web or software development experience.
* Experience automating the provisioning of environments by designing, implementing, and managing configuration and deployment infrastructure-as-code solutions.
* Experience delivering scalable solutions utilizing Amazon Web Services: EC2, S3, RDS, Lambda, API Gateway, message queues, and CloudFormation templates.
* Experience deploying and administering Kubernetes on AWS, GCP, or Azure.
* Capable of designing secure and scalable solutions.
* Strong *nix administration skills.
* Development in a Linux environment using Bash, PowerShell, Python, JS, Go, or Groovy.
* Experience automating and streamlining build, test, and deployment phases for continuous integration.
* Experience with automated deployment technologies such as Ansible, Puppet, or Chef.
* Experience administering automated build environments such as Jenkins and Hudson.
* Experience configuring and deploying logging and monitoring services - fluentd, Logstash, geohashes, etc.
* Experience with Git/GitHub/GitLab.
* Experience with DockerHub or a container registry.
* Experience building and deploying containers to a production environment.
* Strong knowledge of security and recovery from a DevOps perspective.

Bonus skills:
* Experience with RabbitMQ administration.
* Experience with kops.
* Experience with HashiCorp Vault administration, and Goldfish (a frontend Vault UI).
* Experience with Helm for deployment to Kubernetes.
* Experience with CloudWatch.
* Experience with Ansible and/or another configuration management language.
* Experience with Ansible Tower (not necessary).
* Experience with VPNs (OpenVPN preferable).
* Experience with network administration and an understanding of network topology and architecture.
* Experience with AWS spot instances or Google preemptible VMs.
* Experience with Grafana administration, SSO (Okta or JumpCloud preferable), LDAP / Active Directory administration, CloudHealth, or cloud cost optimization.
* Experience with Kubernetes-based software - for example, heptio/ark, ingress-nginx, Anchore Engine.
* Familiarity with the ELK stack.
* Familiarity with basic administrative tasks and building artifacts on Windows.
* Familiarity with other cloud infrastructures such as Cloud Foundry.
* Strong web or software engineering experience.
* Familiarity with security clearances, in case you contribute to our non-commercial projects.

W2 Benefits:
Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
* Six weeks paid time off per year (PTO + holidays).
* Six percent 401k matching, vested immediately.
* Free PPO/POS healthcare for the entire family.
* We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
* Want to take time off without using vacation time? Shuffle your hours around in any pay period.
* Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we'll buy you the new version whenever you want.
* Want some training, or to travel to a conference that is relevant to your job? We offer that too!
* This organization participates in E-Verify.

About You:
* You believe in and practice Agile/DevOps.
* You are organized and eager to accept responsibility.
* You want a seat at the table at the inception of new efforts; you do not want things "thrown over the wall" to you.
* You are an active listener, empathetic, and willing to understand and internalize the unique needs and concerns of each individual client.
* You adjust your speaking style for your audience and can interact successfully with both technical and non-technical clients.
* You are detail-oriented but never lose sight of the big picture.
* You can work equally well individually or as part of a team.
* U.S. citizenship required.

Please mention the words **RIPPLE DESK VERSION** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xNDY=). This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read this and see they're human.

#Location
United States
This job post is closed and the position is probably filled. Please do not apply. Work for YouGov and want to re-open this job? Use the edit link in the email from when you posted the job!

Closed by robot after the apply link errored with code 404, 3 years ago.
Crunch.io, part of YouGov PLC, is looking for a number of talented Python Developers to join its fully remote teams.

Crunch.io is a market-defining company in the analytics SaaS marketplace. We've built a revolutionary platform that transforms our customers' ability to drive insight from market research and survey data. We offer a complete survey data analysis platform that allows market researchers, analysts, and marketers to collaborate in a secure, cloud-based environment, using a simple, intuitive drag-and-drop interface to prepare, analyze, visualize, and deliver survey data and analysis.

Quite simply, Crunch provides the quickest and easiest way for anyone, from CMO to PhD, with zero training, to analyze survey data. Users create tables, charts, graphs, and maps. They filter and slice-and-dice survey data directly in their browser.

Tech Stack:

We currently run our in-house production Python code against Redis, MongoDB, and Elasticsearch services. We proxy API requests through NGINX, load balance with ELBs, and deploy our React web application to the AWS CloudFront CDN. Our current CI/CD process is built around GitHub, Jenkins, and Blue Ocean, including unit, integration, and end-to-end tests and automated system deployments. We deploy to Auto Scaling Groups using Ansible and cloud-init.

What will I be doing?

* Develop performance enhancements and new features in Crunch's proprietary Python in-memory database.
* Work closely with product managers, sales, and the customer success team to understand the system's functional and non-functional requirements.
* Establish realistic estimates for timelines and ensure that projects remain on target to meet deadlines.
* Contribute to code quality through unit testing, integration testing, code review, and system design using Python.
* Assist in diagnosing and fixing system failures quickly when they occur in your area of expertise. This is limited to when the on-call rotation needs a subject matter expert to help troubleshoot an issue.
* Design and implement RESTful API endpoints using the Python programming language.

Experience / Qualifications:

* Strong understanding of the software development lifecycle.
* A record of successful delivery of SaaS and cloud-based applications.
* Extensive programming experience using Python.
* A commitment to producing robust, testable code.
* Experience with data locality problems and caching issues.
* Expertise writing Cython or C extensions.
* Deep understanding of how a database system works internally (indexing, extents, memory management, concurrency, durability, journaling).
* Results-driven, self-motivated, and enthusiastic.
* Experience working in a Linux environment.
* Experience with client/server architectures.
* A keen interest in learning new things.

Desirable Experience:

* Expertise with the numpy library.
* Experience implementing custom messaging protocols (sequence numbers, TTLs, etc.).
* Database experience using MongoDB and Elasticsearch.
* Bachelor's degree in Programming, Computer Science, or an Engineering-related field.
* pytest testing experience.
* Design and deployment of Continuous Integration tools (e.g., Jenkins, Bamboo, Travis).

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Python, Developer, Digital Nomad, React, Elasticsearch, C, API, SaaS, and Linux:

$70,000 — $120,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
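The desirable-experience list for this role mentions implementing custom messaging protocols with sequence numbers and TTLs. A minimal sketch of those two mechanisms; the `Channel` class and its error types are illustrative, not Crunch's actual protocol:

```python
import itertools
import time
from dataclasses import dataclass, field

@dataclass
class Message:
    seq: int          # monotonically increasing sequence number
    payload: bytes
    ttl: float        # seconds the message stays valid
    sent_at: float = field(default_factory=time.monotonic)

    def expired(self) -> bool:
        return time.monotonic() - self.sent_at > self.ttl

class Channel:
    """Assign sequence numbers on send; on receive, reject expired
    messages and detect gaps (lost or reordered messages)."""

    def __init__(self) -> None:
        self._seq = itertools.count()
        self._expected = 0

    def send(self, payload: bytes, ttl: float = 30.0) -> Message:
        return Message(next(self._seq), payload, ttl)

    def receive(self, msg: Message) -> bytes:
        if msg.expired():
            raise TimeoutError(f"message {msg.seq} exceeded its TTL")
        if msg.seq != self._expected:
            raise ConnectionError(
                f"gap detected: expected seq {self._expected}, got {msg.seq}")
        self._expected += 1
        return msg.payload
```

A real protocol would add retransmission and acknowledgements on top of this; sequence numbers give ordering and loss detection, TTLs keep stale messages from being processed late.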
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Backtracks.fm and want to re-open this job? Use the edit link in the email when you posted the job!
Build delightful software for podcasts and spoken word audio. [Backtracks](https://backtracks.fm/?ref=careers) is seeking a qualified **Senior Python Developer** with some PEP to join our Product & Engineering Team.

## Opportunity

[Backtracks](https://backtracks.fm/?ref=careers) helps audio content creators and brands know and grow their audiences and revenue. You will be responsible for building the Python side of our web applications, platform, and APIs to deliver delightful experiences to our users and fellow developers.

### Your day-to-day:

* Design and implement services and solutions in Python
* Code and deploy new features in collaboration with our product and engineering teams
* Be part of a small team, with a large amount of ownership and responsibility for managing things directly
* Ship high-quality solutions with a sense of urgency and speed
* Work closely with both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution
* Have the freedom to suggest and drive initiatives
* We expect you to be a tech-savvy professional who is curious about new digital technologies and aspires to simple and elegantly Pythonic code.

### You have:

* A history of autonomous design decision making at technically focused companies
* A history of designing and building web components, products, and technology
* Experience working on design and development in any of the following roles:
  * Python Developer
  * Python Engineer
  * Full Stack Developer
  * Full Stack Engineer
  * Product Developer
  * Product Engineer
  * Software Architect
  * BDFL
* In-depth understanding of the entire development process (design, development, and deployment)
* Strong knowledge of:
  * Python 3.6+
  * Distributed systems design and implementation
  * Messaging systems and technologies (e.g. RabbitMQ, Kafka, etc.)
  * Docker
* Confidence or experience working with some or all of the following:
  * Flask, Sanic, FastAPI, Apache mod_wsgi
  * PyTorch, TensorFlow, Keras
  * Spark, Flink
  * Jinja2, HTML, CSS, JavaScript
  * SQLAlchemy
  * Celery
  * Solr, Elasticsearch, or Lucene
  * AWS, Google Cloud, Azure
  * CI/CD deployment processes
* Motivation for, and enjoyment of, a startup environment
* Systematic thinking (you consider how components can scale across our platform and product offerings)
* The ability to code and build out designs independently
* A blend of product, system, and people knowledge that lets you jump into a fast-paced environment and contribute from day one
* An ability to balance a sense of urgency with shipping high-quality, pragmatic solutions
* A strong sense of ownership of your work
* Strong collaboration and communication skills (fluency in English)
* PMA (Positive Mental Attitude)
* A Bachelor's degree in Computer Science or a relevant field, and/or relevant work experience
* 5+ years of professional experience

### Other qualities and traits:

* Passion for podcasts, radio, and spoken word audio
* Passion for delivering high-quality software with quick turnaround times (e.g. you ship)
* A product-first approach to building software
* An enthusiasm for hard problems
* Thrives in a fast-paced environment

### Bonus points if you have experience with:

* Building Python-based REST APIs
* Apache Airflow
* Jenkins, Drone, GitHub Actions, or CircleCI for CI/CD
* Audio analysis and processing

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Elasticsearch, Python, API, Senior, Full Stack, Redis, RabbitMQ, Web Developer, Developer, Digital Nomad, and Apache:

$60,000 — $130,000/year

#Location
Worldwide
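The Backtracks role above leans on messaging systems (RabbitMQ, Kafka) and Celery-style background processing. A broker-backed task queue boils down to the producer/worker pattern, sketched here with only the standard library as a stand-in (this is the shape of the pattern, not how Backtracks actually wires Celery):

```python
import queue
import threading

# Producers enqueue jobs, a pool of workers drains them concurrently --
# the same shape as a RabbitMQ/Celery setup, minus the broker.
jobs: "queue.Queue[int | None]" = queue.Queue()
results: "queue.Queue[int]" = queue.Queue()

def worker() -> None:
    while True:
        item = jobs.get()
        if item is None:          # poison pill shuts the worker down
            jobs.task_done()
            return
        results.put(item * item)  # pretend this is audio-analysis work
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    jobs.put(n)
jobs.join()                        # wait for all real jobs to finish
for _ in threads:
    jobs.put(None)                 # one pill per worker
for t in threads:
    t.join()
print(sorted(results.queue))       # -> squares of 0..9
```

Swapping the in-process `queue.Queue` for a broker buys durability and lets workers scale across machines; the worker loop itself stays essentially the same.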
This job post is closed and the position is probably filled. Please do not apply. Work for Atlastix and want to re-open this job? Use the edit link in the email when you posted the job!
We're looking for an ELK guru with a tonne of experience building custom connectors.

We require automation for ingest pipelines and dashboard builds for a huge list of applications.

Dashboards should be designed to show the most valuable insights as intuitively as possible.

The finished module will consist of:
* Kibana JSON objects for the index-pattern, visualizations, and dashboards
* Logstash pipeline configuration (or an Elasticsearch ingest pipeline - but Logstash is preferred)
* Elasticsearch index template
* Data source(s), one or more of:
  - API connector in Python or Golang
  - Custom beat implementation based on libbeat
  - Logstash input plugin in Ruby

Artefacts should be amenable to centralised management/deployment with Ansible, whether through templated Logstash/Beats configuration or POSTing to the Kibana saved objects API. All work must be compatible with ELK 7.1.1-oss. Where existing code is leveraged, it should be BSD/Apache licensed or similar - GPL-type licenses are acceptable if necessary but must be isolated. Proprietary-licensed code must not be used, e.g. the Elastic license.

# Responsibilities

Build connectors consisting of:

* Kibana JSON objects for the index-pattern, visualizations, and dashboards
* Logstash pipeline configuration (or an Elasticsearch ingest pipeline - but Logstash is preferred)
* Elasticsearch index template
* Data source(s), one or more of:
  - API connector in Python or Golang
  - Custom beat implementation based on libbeat
  - Logstash input plugin in Ruby

# Requirements

A long list of previous experience in ELK and custom data source ingestion, enrichment, and visualisation.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Elasticsearch, Python, Ruby, Developer, Digital Nomad, API, and Apache:

$70,000 — $130,000/year

#Location
Worldwide
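The Atlastix posting above asks for artefacts that can be deployed by POSTing to the Kibana saved objects API. A sketch of what that looks like from a Python API connector: build the saved-object body, then POST it with the required `kbn-xsrf` header. The `push` helper, its URL handling, and authentication are illustrative assumptions; only the payload builder is exercised here:

```python
import json
import urllib.request

def index_pattern_object(title: str, time_field: str = "@timestamp") -> dict:
    """Build the request body for Kibana's saved objects API
    (POST /api/saved_objects/index-pattern), as used when a connector's
    index-pattern is deployed centrally (e.g. from Ansible)."""
    return {"attributes": {"title": title, "timeFieldName": time_field}}

def push(kibana_url: str, obj: dict) -> None:
    # Hypothetical deployment step; Kibana rejects API writes that
    # lack the kbn-xsrf header.
    req = urllib.request.Request(
        f"{kibana_url}/api/saved_objects/index-pattern",
        data=json.dumps(obj).encode(),
        headers={"Content-Type": "application/json", "kbn-xsrf": "true"},
    )
    urllib.request.urlopen(req)  # network call; not exercised here

print(json.dumps(index_pattern_object("myapp-*")))
```

Visualizations and dashboards deploy the same way against their own saved-object types, which is why the posting's module format (Kibana JSON objects plus pipeline config) maps cleanly onto templated Ansible tasks.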
This job post is closed and the position is probably filled. Please do not apply. Work for Perch Security and want to re-open this job? Use the edit link in the email when you posted the job!
Developers at Perch write clean and maintainable Python 3 and modern JavaScript.

On the backend, we mainly use the battle-tested Django Rest Framework to create scalable, robust, queryable REST APIs. We architect performant database tables and queries in PostgreSQL, query our multi-terabyte Elasticsearch cluster, and connect to microservices as well as 3rd-party APIs to compose the data returned by our endpoints. We use Redis to cache expensive calls where necessary. We use Docker and AWS to support our infrastructure. On the frontend, we have a single page application written in React that connects to our Django API for data.

Our growing development team follows an agile workflow, planning projects that can be broken down into tasks that can be completed in two-week sprints. If you're a strong technology generalist who loves learning new things and isn't afraid to dive in and figure things out, Perch might be the place for you.

A day in the life

* Work with a team of developers, designers, and stakeholders to plan, build, and deliver updates to our core products and services every sprint.
* Write, test, and ship code for our production Django API.
* Debug errors that might crop up and write patches to fix them.
* Design database tables for new features.
* Refactor and improve existing code for greater simplicity or performance.
* Write code to integrate with 3rd-party partners and data sources.
* Write, test, and ship code for our production React app that consumes REST APIs (and possibly GraphQL in the future).
* Write, test, and ship code for multiple Node.js services that consume and produce REST APIs (and possibly GraphQL in the future).
* Work independently to identify bottlenecks and sources of potential failure and improve them.
* Create, maintain, and monitor backend services deployed with AWS for things like email processing, data visualization, and data transformation (at a pretty large scale).
* Participate in code review and collaborate with other developers to ensure we're shipping high-quality code and products.

A perfect match

* You have extensive experience writing modern, testable Python code with a team of developers.
* You have experience with a web framework such as Django (Django Rest Framework) or Flask.
* You are comfortable creating relational database models, and preferably have some experience with PostgreSQL.
* You have experience writing code for web APIs and know which HTTP status codes to use when. You know when to use POST vs. PUT requests and some REST API concepts.
* You know some Linux and aren't afraid to SSH into a server to check out what's going on at the operating system level: checking disk usage, running processes, or tailing logs.
* You have experience with a modern JavaScript framework (Node.js, Express, React).
* You can follow patterns established by JavaScript developers and make changes to React code.
* You have experience querying Elasticsearch.

Above and beyond

* Experience testing code with pytest
* Experience with Elasticsearch and other Elastic products
* Amazon Web Services (RDS, EC2, S3, Beanstalk, and seemingly a million others)
* CI/CD (Docker, Jenkins, GitHub, or similar)
* Some networking experience; you know what a subnet is
* Cybersecurity interest or background

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to InfoSec, Senior, Full Stack, Developer, Digital Nomad, React, JavaScript, Elasticsearch, Python, API, Linux, and Backend:

$62,500 — $120,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
This job post is closed and the position is probably filled. Please do not apply. Work for YouGov and want to re-open this job? Use the edit link in the email when you posted the job!
Crunch.io, part of YouGov PLC, is a market-defining company in the analytics SaaS marketplace. We're a company on the rise. We've built a revolutionary platform that transforms our customers' ability to drive insight from market research and survey data. We offer a complete survey data analysis platform that allows market researchers, analysts, and marketers to collaborate in a secure, cloud-based environment, using a simple, intuitive drag-and-drop interface to prepare, analyze, visualize, and deliver survey data and analysis. Quite simply, Crunch provides the quickest and easiest way for anyone, from CMO to PhD, with zero training, to analyze survey data. Users create tables, charts, graphs, and maps. They filter and slice-and-dice survey data directly in their browser.

Our start-up culture is casual, respectful of each other's varied backgrounds and lives, and high-energy because of our shared dedication to our product and our mission. We are loyal to each other and our company. We value work/life balance, efficiency, simplicity, and fantastic customer service! Crunch has no offices and fully embraces a 100% remote culture. We have 40 employees spread across 5 continents. Remote work at Crunch is flexible and largely independent, yet highly cooperative.

We are hiring a DevOps Lead to help expand our platform and operations excellence. We are inviting you to join our small, fully remote team of developers and operators helping make our platform faster, more secure, and more reliable. You will need to be self-motivated and disciplined in order to work with our fully distributed team.

We are looking for someone who is a quick study, who is eager to learn and grow with us, and who has experience in DevOps and Agile cultures. At Crunch, we believe in learning together: we recognize that we don't have all the answers, and we try to ask each other the right questions. As Crunch employees are completely distributed, it's crucial that you can work well independently and keep yourself motivated and focused.

Our Stack:

We currently run our in-house production Python code against Redis, MongoDB, and Elasticsearch services. We proxy API requests through NGINX, load balance with ELBs, and deploy our React web application to the AWS CloudFront CDN. Our current CI/CD process is built around GitHub, Jenkins, and Blue Ocean, including unit, integration, and end-to-end tests and automated system deployments. We deploy to Auto Scaling Groups using Ansible and cloud-init.

In the future, all or part of our platform may be deployed via Drone CI, Kubernetes, NGINX ingress, Helm, and Spinnaker.

What you'll do:

As a Leader:

* Manage and lead a team of Cloud Operations Engineers who are tasked with ensuring our uptime guarantees to our customer base.
* Scale the worldwide Cloud Operations Engineering team with the strategic implementation of new processes and tools.
* Hire and ramp exceptional Cloud Operations Engineers.
* Assist in scoping, designing, and deploying systems that reduce Mean Time to Resolve for customer incidents.
* Inform executive leadership and escalation management personnel of major outages.
* Compile and report KPIs across the full company.
* Work with Sales Engineers to complete pre-sales questionnaires and to gather customer use metrics.
* Prioritize projects competing for human and computational resources to achieve organizational goals.

As an Engineer:

* Monitor and detect emerging customer-facing incidents on the Crunch platform; assist in their proactive resolution, and work to prevent them from occurring.
* Coordinate and participate in a weekly on-call rotation, where you will handle short-term customer incidents (from direct surveillance or through alerts via our Technical Services Engineers).
* Diagnose live incidents, differentiate between platform issues versus
usage issues across the entire stack - hardware, software, application, and network, within physical datacenter and cloud-based environments - and take the first steps towards resolution.
* Automate routine monitoring and troubleshooting tasks.
* Cooperate with our product management and engineering organizations by identifying areas for improvement in the management of the applications powering the Crunch infrastructure.
* Provide consistent, high-quality feedback and recommendations to our product managers and development teams regarding product defects or recurring performance issues.
* Be the owner of our platform. This includes everything from our cloud provider implementation to how we build, deploy, and instrument our systems.
* Drive improvements and advancements to the platform in areas such as container orchestration, service mesh, and request/retry strategies.
* Build frameworks and tools to empower safe, developer-led changes, automate the manual steps, and provide insight into our complex system.
* Work directly with software engineering and infrastructure leadership to enhance the performance, scalability, and observability of multiple applications, ensure that production hand-off requirements are met, and escalate issues.
* Embed into SRE projects to stay close to the operational workflows and issues.
* Evangelize the adoption of best practices in relation to performance and reliability across the organization.
* Provide a solid operational foundation for building and maintaining successful SRE teams and processes.
* Maintain project and operational workload statistics.
* Promote a healthy and functional work environment.
* Work with security experts to do periodic penetration testing, and drive resolution of any issues discovered.
* Liaise with the IT and Security Team Leads to successfully complete cross-team projects, filling in for these Leads when necessary.
* Administer a large portfolio of SaaS tools used throughout the company.

Qualifications:

* Team Lead experience of an on-call DevOps, SRE, or Cloud Operations team (at least 2 years).
* Experience recruiting, mentoring, and promoting high-performing team members.
* Experience being an on-call DevOps, SRE, or Cloud Operations engineer (at least 2 years).
* Proven track record of designing, building, sizing, optimizing, and maintaining cloud infrastructure.
* Proven experience developing software, CI/CD pipelines, automation, and managing production infrastructure in AWS.
* Proven track record of designing, implementing, and maintaining full CI/CD pipelines in a cloud environment (Jenkins experience preferred).
* Experience with containers and container orchestration tools (Docker, Kubernetes, Helm, Traefik, NGINX ingress, and Spinnaker experience preferred).
* Expertise with Linux system administration (5+ years) and networking technologies, including IPv6.
* Knowledgeable about a wide range of web and internet technologies.
* Knowledge of NoSQL database operations and concepts.
* Experience in monitoring, system performance data collection and analysis, and reporting.
* Capability to write small programs/scripts to solve both short-term systems problems and to automate repetitive workflows (Python and Bash preferred).
* Exceptional English communication and troubleshooting skills.
* A keen interest in learning new things.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to DevOps, Executive, React, English, Elasticsearch, Cloud, NoSQL, Python, API, Sales, SaaS, Engineer, Nginx, and Linux:

$70,000 — $120,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
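The engineering duties for the DevOps Lead role above include coordinating a weekly on-call rotation. The scheduling arithmetic behind such a rotation is just modular arithmetic over weeks elapsed since an agreed start date; a hypothetical helper (not Crunch's actual schedule or tooling):

```python
from datetime import date

def on_call(engineers: list[str], day: date,
            epoch: date = date(2021, 1, 4)) -> str:
    """Weekly rotation: each engineer takes one week in turn, starting
    from an agreed epoch (here a Monday). Hypothetical helper."""
    weeks_elapsed = (day - epoch).days // 7
    return engineers[weeks_elapsed % len(engineers)]

team = ["alice", "bob", "carol"]
print(on_call(team, date(2021, 1, 5)))   # week 0 -> alice
print(on_call(team, date(2021, 1, 11)))  # week 1 -> bob
print(on_call(team, date(2021, 1, 25)))  # week 3 wraps -> alice
```

Real schedulers (e.g. PagerDuty) layer overrides and handoff times on top, but this modular lookup is the core that makes the rotation deterministic and auditable.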
This job post is closed and the position is probably filled. Please do not apply. Work for Close and want to re-open this job? Use the edit link in the email when you posted the job!
**About Us**

At [Close](https://close.com/), we're building the sales communication platform of the future. With our roots as the very first sales CRM to include built-in calling, we're leading the industry toward eliminating manual processes and helping companies to close more deals (faster). Since our founding in 2013, we've grown to become a profitable, 100% globally distributed team of ~33 high-performing, happy people who are dedicated to building a product our customers love.

Our backend [tech stack](https://stackshare.io/close-crm/close) currently consists of Python Flask/Gunicorn web apps, with our TaskTiger scheduler handling much of the backend asynchronous task processing. Our data stores include MongoDB, Postgres, Elasticsearch, and Redis. The underlying infrastructure runs on AWS, using a combination of managed services like RDS and ElastiCache and non-managed services running on EC2 instances. All of our compute runs through CI/CD pipelines that build Docker images, run automated tests, and deploy to our Kubernetes clusters. Our backend primarily serves a well-documented [public API](https://developer.close.com/) that our front-end JavaScript app consumes.

We love open source - we use dozens of open source projects, with contributions to many of them, and have released some of our own, like [ciso8601](https://github.com/closeio/ciso8601), [LimitLion](https://github.com/closeio/limitlion), [SocketShark](https://github.com/closeio/socketshark), [TaskTiger](https://github.com/closeio/tasktiger), and more at https://github.com/closeio

**About You**

We're looking for an experienced full-time Software Engineer to join our engineering team. Someone who has a solid understanding of web technologies and wants to help design, implement, launch, and scale major systems and user-facing features.

You should have senior-level experience (~5 years) building modern back-end systems, with at least 3 years of that experience using Python.

You also have around five years of experience using MongoDB, PostgreSQL, Elasticsearch, or similar data stores. You have significant experience designing, scaling, debugging, and optimizing systems to make them fast and reliable. You have experience participating in code reviews and providing overall code quality suggestions to help maintain the structure and quality of the codebase.

You're comfortable working in a fast-paced environment with a small and talented team where you're supported in your efforts to grow professionally. You are able to manage your time well, communicate effectively, and collaborate in a fully distributed team.

You are located in an American or European time zone.

**Bonus points if you have**

* Contributed open source code related to our tech stack
* Led small project teams building and launching features
* Built B2B SaaS products
* Experience with sales or sales tools

**Come help us with projects like**

* Conceiving, designing, building, and launching new user-facing features
* Improving the performance and scalability of our API. 
Help expand our GraphQL implementation.\n* Improving how we [sync ](https://close.com/emailing/) millions of sales emails each month\n* Working with Twilio's API, WebSockets, and WebRTC to improve our [calling features](https://close.com/calling/)\n* Building user-facing analytics features that provide actionable insights based on sales activity data\n* Improving our Elasticsearch-backed powerful [search features](https://close.com/search/)\n* Improving our internal messaging infrastructure using streaming technologies like Kafka and Redis \n* Building new and enhancing existing integrations with other SaaS platforms like Googleโs G Suite, Zapier, and Web Conferencing providers\n\n\n**Why work with us?**\n\n* 100% Remote *(we believe in trust and autonomy)*\n* 2 x Annual Team Retreats โ๏ธ ([Lisbon Retreat Video](https://www.youtube.com/watch?v=gKjyXMz-q-Q&feature=youtu.be))\n* Competitive salary\n* Medical, Dental with HSA option - 99% premiums paid (*US residents)*\n* 5 Weeks PTO + 6 Government Holidays + Dec 24 - Jan 1 Company Holiday\n* Parental Leave *(10 wks primary caregiver / 4 wks secondary caregiver)*\n* 401k matching at 4% *(US residents)*\n* [Our story and team](https://close.com/about/) ๐\n* [Glassdoor Reviews ](https://www.glassdoor.co.uk/Reviews/Close-Reviews-E1155591.htm)\n\nAt Close, everyone has a voice. We encourage transparency and practicing a mature approach to the work-place. In general, we donโt have strict policies, we have guidelines. Work/Life harmony is an important part of our organization - we believe you bring your best to work when you practice self care (whatever that looks like for you).\n\nWe come from 12 countries and 14 states; a collection of talented humans rich in diverse backgrounds, lifestyles and cultures. Twice a year we meet up somewhere around the world to spend time with one another. 
We see these retreats as an opportunity to strengthen the social fiber of our community.\n\nThis team is growing in more ways than one - weโve recently launched 8 babies (and counting!). Unanimously, our favorite and most impactful value is โBuild a house you want to live in.โ We strive to make decisions that are authentic for our organization. At Close, we have a high care factor for one another, in making an awesome product and championing the success of our customers. \n\n*Interested in Close but don't think this role is the best fit for you? View [our other positions](https://jobs.lever.co/close.io/).* \n\nPlease mention the words **NEGLECT YEAR POSITION** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xNDY=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar jobs related to Elasticsearch, Python, Senior, Engineer, Backend, Redis, Developer, Digital Nomad, JavaScript, API, Sales and SaaS:

$62,500 — $125,000/year

#Benefits

Async

Distributed team

#Location

AMERICAS OR EUROPEAN TIME ZONE
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
**ABOUT US**

At Close.io we're building the sales communication platform of the future. We've built a next-generation CRM that eliminates manual data entry and helps sales teams close more deals. We are hiring engineers to join our DevOps team to help take our platform to the next level by adding new features, providing better performance, and even higher reliability. We are a ~30 person entirely remote team (with ~13 engineers) that is profitable and building a product our customers love.

Our backend tech stack currently consists of Python Flask/Gunicorn web apps, with our TaskTiger scheduler handling many of the backend asynchronous processing tasks. Our data stores include MongoDB, Elasticsearch, PostgreSQL, MySQL, and Redis. Our infrastructure runs on AWS, using a combination of managed services like RDS and ElastiCache as well as EC2 instances managed by Puppet. These EC2 instances run everything from our databases to our Kubernetes clusters.

**ABOUT YOU**

We're looking for a full-time DevOps Engineering Manager / Team Lead to join our core team who has significant experience in building, managing, and monitoring infrastructure and backend services at scale.

In this Team Lead role, you will be doing *both* hands-on technical work yourself *and* managing/leading a small remote team (2-3 people) of exceptional Senior SRE/DevOps Engineers.

**Come help us with projects like**

* Building out our Kubernetes infrastructure to include additional services, increased redundancy/scalability, and harnessing new k8s features
* Scaling our Elasticsearch and MongoDB clusters to support our data growth
* Tuning our MySQL and PostgreSQL databases
* Improving the performance and resiliency of our public Close.io API
* Tightening security across our infrastructure
* Implementing autoscaling techniques that balance performance, workload demands, and costs
* Improving our CI/CD process, making builds/deployments faster and safer
* Further developing our Prometheus/Grafana monitoring infrastructure
* Enhancing our Elasticsearch/Logstash/Kibana (ELK) logging stack

**And leadership responsibilities like**

* Being responsible for the happiness, welfare, and productivity of the DevOps team; doing regular 1:1s
* Leading the strategy, roadmap, and goals for the DevOps team
* Coordinating with other engineers on current and upcoming infrastructure needs for Product and overall business goals
* Having primary responsibility for the stability, performance, and security of our infrastructure
* Hiring/growing the team as needed, while keeping a high quality bar

**You should**

* Have 2+ years managing and leading an engineering team, including experience recruiting/hiring great engineers
* Have a servant-leadership attitude, with the ability to foster a collaborative and positive work environment, especially when PagerDuty is calling
* Have real-world experience building scalable systems, working with large data sets, and troubleshooting various back-end challenges under pressure
* Have significant experience with *nix, Python, bash, and Puppet or similar backend systems and frameworks
* Have experience working with large databases running on MySQL, PostgreSQL, and/or MongoDB
* Have experience configuring monitoring, logging, and other tools to provide visibility and actionable alerts
* Enjoy automating processes using Python, bash, Puppet, or other scripting languages
* Have experience with Docker containers and microservices
* Understand the full web stack, networking, and low-level Unix computing
* Always be thinking of ways to improve the reliability, performance, and scalability of the infrastructure

**Why work with us?**

* Our story and [team retreat video!](https://www.youtube.com/watch?v=gKjyXMz-q-Q&feature=youtu.be)
* Stellar team reviews on [Glassdoor](https://www.glassdoor.com/Reviews/Close-io-Reviews-E1155591.htm)
* Work remotely and create your own schedule (we believe in trust and autonomy)
* Enjoy face-to-face time with the whole team on all-expense-paid retreats twice a year
* Experience building a truly successful SaaS company with a dedicated, small team where you can have a huge impact
* Above-market salary
* Excellent medical and dental coverage, including 99% paid premiums and an HSA option (*US residents)
* Matching 401k (*US residents)

Interested in DevOps but think this Team Lead role isn't the right fit? View our [SRE/DevOps Engineering role](https://jobs.lever.co/close.io/38d0c4ac-c3eb-47e9-a49e-4611f96eef8d) or all our other positions.

Please mention the words **THUMB PERSON ROUTE** when applying to show you read the job post completely.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, Elasticsearch, Engineer, Redis, Executive, Python, API, Senior, Sales, SaaS, Medical and Backend:

$70,000 — $120,000/year

#Benefits

Async
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
Engage builds software to make finding, hiring, and paying workers painless. Our software is causing waves in the UK recruitment industry, serving end hirers, recruitment agencies, and workers, and we're looking to continue growing our product development team.

Team and Culture

We're a diverse team of people, both professionally and personally, bringing together a broad range of skill sets, experiences, backgrounds, and cultures. We believe this helps us build better products for our diverse user base. It's for this reason that we have built a kind, supportive, inclusive team. We have team members from 19 different nationalities, and no gender pay gap.

We pride ourselves on being flexible and family-friendly - it's not unusual to see a baby in a team meeting. Being your best at work means having balance outside it - if you need to pick the kids up from school, visit a sick relative, or just want to walk the dog while the sun is shining, that's part of life. We expect you to work sensible hours and take your holidays.

Engineering is spread across Europe, because we believe you can ship great software from anywhere. Our office is in London, where some of our product development team is based, including Product Management and Design - remote brainstorming can be really hard. We live in Slack, remote people come to London regularly, and we work very hard to make everyone feel included.

The Role

We are looking for a Senior DevOps Engineer to join our infrastructure team, responsible for:

* Keeping our cloud infrastructure secure and highly available.
* Supporting engineers with the necessary tools to ship product easily.
* Facilitating monitoring, alerting, and maintenance.
* Working with our development team on adopting new offerings from AWS or other providers.

We run on Amazon Web Services and script using CloudFormation, Python, and Puppet. Our application services are written in Java and Node, and run in Docker containers in Amazon ECS.
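To make the "script using CloudFormation and Python" approach above concrete, here is a minimal sketch of generating a CloudFormation template with plain Python (standard library only). The resource names, service name, and container image are hypothetical placeholders, not Engage's actual templates:

```python
# Minimal sketch: building a CloudFormation template as a Python dict
# and emitting it as JSON. Names and image are hypothetical examples.
import json


def make_template(service_name: str, image: str) -> dict:
    """Build a tiny CloudFormation template for an ECS cluster and task."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"Skeleton ECS stack for {service_name}",
        "Resources": {
            "AppCluster": {
                "Type": "AWS::ECS::Cluster",
                "Properties": {"ClusterName": f"{service_name}-cluster"},
            },
            "AppTaskDef": {
                "Type": "AWS::ECS::TaskDefinition",
                "Properties": {
                    "ContainerDefinitions": [
                        {"Name": service_name, "Image": image, "Memory": 512}
                    ]
                },
            },
        },
    }


if __name__ == "__main__":
    # Emit JSON that deployment tooling (e.g. the AWS CLI) can consume.
    print(json.dumps(make_template("payroll-api", "example/payroll-api:latest"), indent=2))
```

Generating templates from code like this keeps infrastructure changes in the same review and CI flow as the rest of the codebase, which fits the multiple-deploys-per-week cadence described below.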
Deployments happen multiple times per week, sometimes multiple times per day. We also make use of Lambda, API Gateway, Elasticsearch, and Redis.

Benefits

* Competitive salary.
* Paid holidays.
* Stock options.
* Apple or Linux equipment.
* Occasional travel and accommodation in London.

How to apply

If you think you're a good fit for this role, send us a covering email along with your CV - we want to get to know you! Please also include your availability.

As part of our hiring process, we ask candidates to do a coding test, which is based on our in-production set of technologies.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, Senior, Engineer, Amazon, Elasticsearch, Java, Cloud, Python, API, Travel and Linux:

$70,000 — $120,000/year

#Benefits
401(k)

Distributed team

Async

Vision insurance

Dental insurance

Medical insurance

Unlimited vacation

Paid time off

4 day workweek

401k matching

Company retreats

Coworking budget

Learning budget

Free gym membership

Mental wellness budget

Home office budget

Pay in crypto

Pseudonymous

Profit sharing

Equity compensation

No whiteboard interview

No monitoring system

No politics at work

We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
Closed by robot after the apply link errored with code 410, 3 years ago.
**Note: although this is a remote position, we are currently only seeking candidates in time zones between UTC-2 and UTC+7.**

Hotjar is looking for a driven and ambitious DevOps Engineer with big data experience to support and expand our cloud-based infrastructure, used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 7,500 API requests per second, delivers over a billion pieces of static content every week, and hosts databases well into terabyte-size ranges, making this an interesting and challenging opportunity. As Hotjar continues to grow rapidly, we are seeking an engineer who has experience dealing with high-traffic cloud-based applications and can help Hotjar scale as our traffic multiplies.

This is an excellent career opportunity to join a fast-growing remote startup in a key position.

In this position, you will:

- Be part of our DevOps team building and maintaining our web application and server environment.
- Choose, deploy, and manage tools and technologies to build and support a robust infrastructure.
- Be responsible for identifying bottlenecks and improving the performance of all our systems.
- Ensure all necessary monitoring, alerting, and backup solutions are in place.
- Do research and keep up to date on trends in big data processing and large-scale analytics.
- Implement proof-of-concept solutions in the form of prototype applications.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Amazon, Elasticsearch, Python, Engineer, Full Time, Web Developer, DevOps, Cloud and API:

$70,000 — $120,000/year
#Location

Valletta
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
Closed by robot after the apply link errored with code 410, 3 years ago.
**Note: although this is a remote position, we are currently only seeking candidates in time zones between UTC-2 and UTC+7.**

Hotjar is looking for a driven and ambitious DevOps Engineer with big data experience to support and expand our cloud-based infrastructure, used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 7,500 API requests per second, delivers over a billion pieces of static content every week, and hosts databases well into terabyte-size ranges, making this an interesting and challenging opportunity. As Hotjar continues to grow rapidly, we are seeking an engineer who has experience dealing with high-traffic cloud-based applications and can help Hotjar scale as our traffic multiplies.

This is an excellent career opportunity to join a fast-growing remote startup in a key position.

In this position, you will:

- Be part of our DevOps team building and maintaining our web application and server environment.
- Choose, deploy, and manage tools and technologies to build and support a robust infrastructure.
- Be responsible for identifying bottlenecks and improving the performance of all our systems.
- Ensure all necessary monitoring, alerting, and backup solutions are in place.
- Do research and keep up to date on trends in big data processing and large-scale analytics.
- Implement proof-of-concept solutions in the form of prototype applications.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, Elasticsearch, Python, Engineer, Full Time, Web Developer, Cloud and API:

$70,000 — $120,000/year
#Location

Saint Julian's
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
Closed by robot after the apply link errored with code 403, 3 years ago.
Are you a PHP or Python software developer with a magic touch?

We're on a quest to build a world-class team of exceptional software engineers who are dedicated to devising exceptional solutions to hairy big data challenges in the online marketing world.

We believe we can really stretch you to get the very best out of yourself with the nature of our projects. So if you're entertaining the idea of a new career challenge in 2015, then get in touch!

We firmly believe in using the right tool for the job, so you will be exposed to Symfony, Doctrine, Python, Tornado, Java, Hadoop, Elasticsearch, Cassandra, TokuDB, and the odd magic wand (whatever gets the job done properly).

We constantly try to improve ourselves and follow industry best practices, so code reviews, CI, automation, and knowledge-sharing sessions (every Thursday we have a lunch of wisdom) are all part of the picture.

You will be part of an integrated team, which means you will be responsible for your code right through to production and technical support.

We're looking for someone who fits most of the following:

• Experience with a modern PHP framework and MVC (Symfony 2 preferred)
• Excellent PHP and JavaScript OOP skills
• Experience developing REST APIs
• Experience in writing optimized SQL queries
• TDD/BDD
• Several years' experience in customer-facing software development
• Exposure to big data solutions

Interested in finding out more about the company and career opportunities?

Come and chat to us and show us the tricks up your sleeve!

#Salary and compensation
$40,000 — $50,000/year
#Location

London
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.