We're a highly skilled team of software engineers who are building an awesome product and moving fast. We value people who take initiative, and we empower everyone at Klue to make real changes to the product and our processes.

We are looking for a Senior Backend Engineer to work with our Consumer team to deliver high-quality products in the most efficient way.

FAQ

Q: Klue who?
A: We're Klue. From a technical perspective, Klue's mission is to descale huge amounts of data to the human level, so people can process it and make use of it. Klue is that trusted intermediary: today it's proven for sales enablement, tomorrow it's all-teams enablement.

Q: What level of experience are we looking for?
A: Right now we are looking for a Senior-level Backend Engineer.

Q: What is our development team working on?
A: As part of our backend team, we are concerned with data storage and retrieval and the infrastructure that enables them. Here's what our development team is working on, and what motivated Software Engineers can dig into alongside us (a sketch of the ingest-and-search flow follows this list):
- Big data, and lots of it
- Ingesting thousands of news articles, web pages, and marketing and sales data points per day. The challenge is indexing them over long periods of time and keeping them searchable and ready for different kinds of analysis.
- Expanding our Rails REST API and offering public APIs to enable integrations.
- Architecting infrastructure for a scalable, resilient and robust service. We are migrating from a monolith to Kubernetes-hosted microservices.
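For a concrete flavour of that ingest-and-search loop, here is a minimal sketch using the official Python Elasticsearch client (8.x). It is illustrative only: the index name, document fields, and local endpoint are assumptions, not Klue's actual schema or infrastructure.

```python
# Minimal ingest-and-search sketch (assumed index name, fields, and endpoint).
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local dev cluster

def ingest_article(doc_id: str, title: str, body: str, source: str) -> None:
    """Index one news article so it stays searchable over the long term."""
    es.index(
        index="articles",
        id=doc_id,
        document={
            "title": title,
            "body": body,
            "source": source,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        },
    )

def search_articles(query: str) -> list[dict]:
    """Full-text search across title and body, ranked by relevance."""
    resp = es.search(
        index="articles",
        query={"multi_match": {"query": query, "fields": ["title", "body"]}},
        size=10,
    )
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```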
Q: What tech stack is this team working with?
A: Ruby (Rails), Python (Flask), PostgreSQL, Elasticsearch, Redis, GCP, AWS, TensorFlow, Keras, Docker, Kubernetes.
We code review all changes, continuously integrate, pay down technical debt, and aim for high automated test coverage. We love microservices and, while we mostly use Python, Ruby, Google Cloud Platform, Linux, JavaScript, and React, new services can be built using whatever tools make sense to get the job done and support our game-changing innovation.

Q: Are you hybrid friendly?
A: Yes! Hybrid is the best of both worlds (remote and in-office). Our main Canadian hubs are in Vancouver and Toronto, and most of our teams are located in EST and PST. You and your team will be in office at least 2 days per week.

Q: What skills do you bring?
* Expertise in at least one general-purpose programming language, with a strong preference for Ruby on Rails
* Expertise in relational databases such as PostgreSQL or MySQL
* Experience designing REST APIs
* Experience with NoSQL databases such as Elasticsearch or MongoDB is a plus
* Experience with Docker, Kubernetes, AWS, or GCP is a plus
* Bonus if you have data engineering interest and experience: ETL pipelines, Snowplow, Snowflake, BigQuery, Redshift, Airflow, or equivalent

Q: What motivates our current team right now?
* The type of work: challenging, stimulating and meaningful work on a new and relevant tech stack. We know engineers especially want to work on hard technical and innovative problems.
* The inspiration from skilled and proven leaders.
* Entrepreneurial fingerprints on what will be a future billion-dollar company anchored in Canada.
* Culture, team, and the work environment.
* A high degree of autonomy and accountability.
* A high degree of transparency and high-quality communication.

Q: What are the people at Klue like?
* Builders
* Intellectually curious
* Ambitious
* Objective-oriented
* Check us out!

Q: What about total compensation & benefits?
* Benefits. We currently have extended health benefits starting on your first day.
* Time off. Take what you need. We want the team to prioritize wellness and avoid burnout. Vacation usually falls into three categories: recharging, life events, and keeping a work-life balance. Just ensure the required work gets done and clear it with your team in advance. You need to take at least two weeks off every year; the average Klue team member takes 2-4 weeks of PTO per year.

$150,000 - $180,000 a year. We gather compensation benchmarking data across the BC and Canadian tech industry and use that data to build a range for our current team and future talent. Your exact salary is determined by experience level, skill, capabilities, whether or not you select options, and internal pay parity. If you feel like this role is a great fit and have questions about comp, get in touch and we're happy to discuss further; there is always an ongoing conversation around compensation.

Lastly, we take potential into consideration. An equivalent combination of education and experience may be accepted in lieu of the specifics listed above. If you know you have what it takes, even if that's different from what we've described, be sure to explain why in your application. Reach out and let's see if there is a home here for you now or in the future.

We've made a commitment to support and contribute to a diverse environment, on our teams and in our community. We're early in our journey: we've started employee-led resource groups, committed to Pay Up For Progress, and use success profiles for roles instead of "years of experience". We continue to scale our efforts as Klue grows. We're proud to be an equal opportunity employer and have dedicated that commitment to our current and future #kluecrew. During the interview process, please let us know if there is anything we need to make more accessible or accommodate to support you to be successful.

All interviews will be conducted via video calls. We work in a hybrid model of WFH (remote) and in-office. We're excited to meet you, and in the meantime, get to know us:

* Pay Up For Progress, the 50-30 Challenge, and the Klue Blog
* Win-Loss Acquisition (2023)
* Series A (2020)
* Series B (2021)
* Culture, culture, culture!
* Winning as Women & Competitive Enablement Show
* Glassdoor
* About Us
* Twitter
* Instagram
* LinkedIn
* Wellfound (AngelList)

#Salary and compensation
No salary data was published by the company, so the range below is estimated from similar Python, Cloud, NoSQL, Ruby, API, and senior backend engineering roles:

$60,000 — $110,000/year

#Benefits

401(k) · Distributed team · Async · Vision insurance · Dental insurance · Medical insurance · Unlimited vacation · Paid time off · 4 day workweek · 401k matching · Company retreats · Coworking budget · Learning budget · Free gym membership · Mental wellness budget · Home office budget · Pay in crypto · Pseudonymous · Profit sharing · Equity compensation · No whiteboard interview · No monitoring system · No politics at work · We hire old (and young)
This Tesla job post is closed and the position is probably filled. Please do not apply.
**The Role**

As a member of the Autopilot AI Tooling team, you will design and implement a diverse set of tools that power our machine learning workflows. You will work closely with world-class AI researchers and machine learning experts toward Tesla's goal of a self-driving future.

The AI Tooling team is a central part of Autopilot AI. The systems you build will have a large impact on the entire lifecycle of model development. This includes data processing, data discovery, annotation, and visualization. It also includes tools that help us automate the entire workflow of training, validation, and productionization of our Autopilot neural networks.

As an engineer on the AI Tooling team, you bring top-notch software engineering skills and can contribute to our tooling systems immediately. Knowledge of machine learning, computer vision, or neural networks is a plus but not required; that can be learned on the job as you work with our machine learning experts. A strong candidate for the AI Tooling team will either be an excellent software generalist, or someone who is exceptionally strong in either backend or frontend engineering.

**Responsibilities** (a toy sketch of the first item follows this list)

* Design and implement large-scale data processing pipelines that handle a diverse set of Autopilot-related data such as images, sensor inputs, and human labels.
* Design and implement tools, tests, metrics, and dashboards to accelerate the development cycle of our model training.
* Work closely with frontend engineers to seamlessly integrate with backend systems.
* Work closely with AI researchers to support evolving research projects and implement new production features.
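To make the first responsibility concrete, here is a toy batch step in pandas: joining camera frames with their human labels to produce a training index. Everything here (paths, column names, Parquet as the storage format) is a hypothetical illustration, not Tesla's actual pipeline.

```python
# Toy pipeline step (hypothetical schema and paths; not Tesla's pipeline).
import pandas as pd

def build_training_index(frames_path: str, labels_path: str, out_path: str) -> pd.DataFrame:
    frames = pd.read_parquet(frames_path)  # assumed columns: frame_id, camera, captured_at
    labels = pd.read_parquet(labels_path)  # assumed columns: frame_id, label, annotator

    # Keep only frames that have been human-labeled, one row per (frame, label).
    joined = frames.merge(labels, on="frame_id", how="inner")

    # Drop exact duplicate annotations and rows with missing labels.
    joined = joined.drop_duplicates().dropna(subset=["label"])

    joined.to_parquet(out_path, index=False)
    return joined
```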
**Requirements**

* BS/MS in Computer Science or a related field, or equivalent industry experience.
* Strong knowledge of at least one programming language relevant to data engineering, such as Python or Scala, is required.
* Experience with scientific computing libraries such as numpy, pandas, or scikit-learn is a plus.
* Experience with big data systems such as Spark or Hadoop/MapReduce is a plus.
* Experience with cloud computing such as AWS EC2/S3/RDS is a plus.
* Experience with Elasticsearch or other scalable search systems is a plus.
* Experience with real-time data processing is a plus.
* Knowledge of machine learning, computer vision, or neural networks is a plus but not required.

Tesla participates in the E-Verify Program.

Tesla is an Equal Opportunity / Affirmative Action employer committed to diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, national origin, disability, protected veteran status, gender identity or any other factor protected by applicable federal, state or local laws.

Tesla is also committed to working with and providing reasonable accommodations to individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the interview process.

#Location
Palo Alto, Austin or Remote
This Jungle Scout job post is closed and the position is probably filled. Please do not apply.
At Jungle Scout, we are on a mission to empower entrepreneurs and brands to grow successful e-commerce businesses, and we provide the industry-leading data, powerful tools, and resources they need.

The role:
* Do you get excited working with talented engineers, leading them to ship product features and enhancements, and helping them thrive in their careers?
* Do you define a great day as getting sh*t done and having fun working with your team?
* Do you have a thirst for breaking down complex initiatives into achievable project plans?
* Do you thrive when you're contributing to a high-performing, humble team?

Amazing, then you're the type of person we're looking for!

We're growing, and we are looking to add a Senior Backend Engineer to the Engineering team focused on building Jungle Scout's enterprise SaaS product.

Where would this person be located? Great question! We're a remote-first company and hope to hire this Senior Software Engineer **anywhere between the EST and PST time zones**.

Interested in learning more? Let's get into the details:

**What you will be doing:**

Architect and build:
* Highly scalable, fault-tolerant, elastic, and secure services
* Large-scale distributed systems
* Applications that are a composition of independent services (microservices)

Make recommendations around:
* Technologies to be used for new services and applications
* Improvements on existing services and applications

Scale, maintain and improve:
* Existing codebases and system infrastructure
* Independent services using CI/CD and multiple environment stages (e.g., staging vs. production) to ensure rapid delivery while maintaining high quality and availability

Participate and contribute in:
* Leading the technical architecture and delivery of complex projects, interfacing with product, design, and multiple engineering teams
* Helping product managers with project planning and facilitating the Scrum process
* Ongoing improvement of engineering practices and procedures

**Who you are:**

* You've done this before. You're an expert with one or more modern programming languages (Ruby, JavaScript, Python, Java), technologies, coding, testing, debugging, and automation techniques, and you have built enterprise-level services with popular backend frameworks (e.g., Ruby on Rails, NodeJS, Spring, Django, Flask).
* You have experience building data-driven systems that have high availability, optimize for performance, and are highly scalable.
* You're experienced with modern SQL and NoSQL databases, know when to use each, and can build performant systems on top of each.
* You're an AWS cloud wizard with experience building cloud-native services at scale, working with core AWS services like EC2, RDS, DynamoDB, Elasticsearch, Elastic Beanstalk, Lambda, CloudWatch, SQS, Kinesis, and SNS.
* You're a master communicator and passionate mentor. You're fluent in written and verbal English to easily chat with our North American teams, and able to communicate effectively, clearly, and concisely on both technical and non-technical subjects. You take any chance you can get to share knowledge with your team, contribute to the team's documentation, and mentor teammates in an open, respectful, flexible, empathetic manner. You do not shy away from taking and giving feedback.
* You're autonomous. You successfully execute large multi-person projects and well-defined initiatives from definition through to the end.

**Working at Jungle Scout**

[Check us out!](https://www.junglescout.com/jobs/)
* The BEST team.
* Remote-first culture.
* International retreats.
* Access to Jungle Scout tools & experts.
* Performance bonus.
* Flexible vacation.
* Comprehensive health benefits & retirement program.

**We prioritize Diversity, Equity, and Inclusion**

At Jungle Scout, we hire great people from a wide variety of backgrounds, not just because it's the right thing to do, but because it makes our company stronger.

Jungle Scout is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

*All offers of employment at Jungle Scout are contingent upon clear results of a comprehensive background check. Background checks will be conducted on all final candidates prior to start date.*

#Salary and compensation
$10,000 — $200,000/year

#Location
North America and South America
This Splitgraph job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future
Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions
**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?
## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.
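Because that endpoint speaks the Postgres wire protocol, "a unified SQL interface" means any stock Postgres client can query it. Here is a sketch using psycopg2; the host, port, database name, and repository name are assumptions for illustration, so check the Splitgraph docs for the real connection details:

```python
# Sketch: querying Splitgraph's SQL endpoint with a stock Postgres client.
# Host/port/dbname are assumptions; the repository name is made up.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="data.splitgraph.com",  # assumed public endpoint; check the docs
    port=5432,
    dbname="ddn",                # assumed database name
    user="<api-key>",
    password="<api-secret>",
    sslmode="require",
)
with conn.cursor() as cur:
    # Repositories are addressed like schemas: "namespace/repository".table
    cur.execute('SELECT * FROM "some-namespace/some-dataset".some_table LIMIT 5;')
    for row in cur.fetchall():
        print(row)
conn.close()
```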
# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What does our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.
- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines. (A toy example of this pattern follows this list.)

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.
- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.
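As promised above, here is a toy version of the stored-procedures pattern. The table and function are invented for illustration, not Splitgraph's actual schema; the point is that once a PL/pgSQL function like this exists, Postgraphile introspects it and exposes it as a GraphQL field with no hand-written API layer in between.

```python
# Toy example of business-logic-in-the-database (hypothetical schema).
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS repositories (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE OR REPLACE FUNCTION search_repositories(term text)
RETURNS SETOF repositories AS $$
BEGIN
  RETURN QUERY
    SELECT * FROM repositories
    WHERE name ILIKE '%' || term || '%';
END;
$$ LANGUAGE plpgsql STABLE;
"""

with psycopg2.connect(dbname="cloud_db") as conn:  # assumed local dev database
    with conn.cursor() as cur:
        cur.execute(DDL)

# Postgraphile pointed at this database would now expose a
# `searchRepositories(term: ...)` query, generated purely by introspection.
```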
# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
Worldwide
This InReach Ventures job post is closed and the position is probably filled. Please do not apply.
InReach is changing how VC in Europe works, for good. Through data, software and Machine Learning, we are building an in-house platform to help us find, reach out to and invest in early-stage European startups, regardless of the city or country they're based in.

We are looking for a back-end developer to continue the development of InReach's data services (a sketch of the first item follows this list). This involves:
* Cleaning / wrangling / merging / processing the data on companies and founders from across Europe
* Building data pipelines with the Machine Learning engineers
* Building APIs to support the front-end investment product used by the Investment team (named DIG)
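As a hypothetical flavour of that first bullet, here is the kind of cleaning-and-merging step sketched with pandas. The column names and normalisation rules are invented for illustration, not InReach's actual data model:

```python
# Invented example: normalise two scraped company lists and merge them.
import pandas as pd

def normalise(df: pd.DataFrame) -> pd.DataFrame:
    """Canonicalise names/domains so the same company matches across sources."""
    df = df.copy()
    df["name"] = df["name"].str.strip().str.lower()
    df["domain"] = df["domain"].str.lower().str.replace(r"^www\.", "", regex=True)
    return df.drop_duplicates(subset=["domain"])

def merge_sources(a: pd.DataFrame, b: pd.DataFrame) -> pd.DataFrame:
    """Outer-merge on domain, preferring source `a` and filling gaps from `b`."""
    merged = normalise(a).merge(
        normalise(b), on="domain", how="outer", suffixes=("", "_b")
    )
    for col in ("name", "country"):  # assumes both sources carry these columns
        merged[col] = merged[col].fillna(merged[f"{col}_b"])
    return merged[["name", "domain", "country"]]
```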
This role will involve working across the stack: from DevOps (Terraform) to web scraping and Machine Learning (Python), all the way to data pipelines and web services (Java), and getting stuck into the front-end (JavaScript). It's a great opportunity to hone your skills and master some new ones.

It is important to us that candidates be passionate about helping entrepreneurs and startups. This is our bread and butter, and we want you to be involved.

InReach is a remote-first employer, and we are looking to this hire to help us become an exceptional place to work for remote employees. Whether you are in the office or remote, we are looking for people with excellent written and verbal communication skills.

### Background Reading:
* [InReach Ventures, the 'AI-powered' European VC, closes new €53M fund](https://techcrunch.com/2019/02/11/inreach-ventures-the-ai-powered-european-vc-closes-new-e53m-fund/?guccounter=1)
* [The Full-Stack Venture Capital](https://medium.com/entrepreneurship-at-work/the-full-stack-venture-capital-8a5cffe4d71)
* [Roberto Bonanzinga starts InReach Ventures with DIG platform](https://www.businessinsider.com/roberto-bonanzinga-starts-inreach-ventures-with-dig-platform-2015-11?r=US&IR=T)
* [Exceptional Communication Guidelines](https://www.craft.do/s/Isrjt4KaHMPQ)

## Responsibilities

* Creatively and quickly coming up with effective solutions to undefined problems
* Choosing technology that is modern but not hype-driven
* Developing features and tests quickly with good, clean code
* Being part of the wider development team, reviewing code and participating in architecture from across the stack
* Communicating exceptionally, both asynchronously (written) and synchronously (spoken)
* Helping to shape InReach as a remote-first organization

## Technologies

Given that this position touches so much of the stack, it will be difficult for a candidate with experience only in Python or only in Java to be effective quickly. While we expect the candidate to be stronger in one or the other, some professional exposure to both is required.

In addition to the programming skills and the ability to write well-designed and tested code, experience with infrastructure on modern cloud platforms and sound architectural reasoning are expected.

None of these are a prerequisite, but they help:
* Functional Programming
* Reactive Streams (RxJava2)
* Terraform
* Postgres
* ElasticSearch
* SQS
* DynamoDB
* AWS Lambda
* Docker
* Dropwizard
* Maven
* Pipenv
* Javascript
* React
* NodeJS

## Interview Process
* 15m video chat with Ben, CTO, to find out more about InReach and the role
* 2h data pipeline technical test (Python)
* 2h web service technical test (Java)
* 30m architectural discussion with Ben, talking through the work you did
* 2h interview with different team members from across InReach. We're a small company, so it's important we see how we'll all work together - not just the tech team!

#Salary and compensation
$55,000 — $70,000/year

#Benefits

Async

#Location
UK or Italy
This Elastic job post is closed and the position is probably filled. Please do not apply.
At Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.

Elastic's Cloud product allows users to build new clusters or expand existing ones easily. This product is built on a Docker-based orchestration system to easily deploy and manage multiple Elastic clusters.

What You Will Do (a sketch of the kind of integration work involved follows this list):

* Implement features to manage multiple Elasticsearch clusters on top of our orchestration layer
* Own and manage services that support functionality like self-service billing, forecasting and customer communications
* Add features to the backend services in Python that integrate Postgres, ZooKeeper and Elasticsearch data stores
* Collaborate with other teams in Cloud Infrastructure to develop scalable, automated solutions that drive our SaaS business
* Grow and share your interest in technical outreach (blog posts, tech papers, conference speaking, etc.)
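To make the Python/Postgres/Elasticsearch bullet concrete, here is a sketch of that kind of glue code: read usage rows from Postgres and index per-deployment billing summaries into Elasticsearch. The table, index, and field names are assumptions for illustration, not Elastic's internal schema:

```python
# Sketch only: hypothetical billing-summary job (invented schema and names).
import psycopg2
from elasticsearch import Elasticsearch

def publish_usage_summaries(pg_dsn: str, es_url: str) -> None:
    es = Elasticsearch(es_url)
    with psycopg2.connect(pg_dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT deployment_id, SUM(capacity_gb_hours) AS usage
            FROM billing_usage                      -- assumed table
            WHERE period >= date_trunc('month', now())
            GROUP BY deployment_id
            """
        )
        for deployment_id, usage in cur.fetchall():
            es.index(
                index="billing-summaries",          # assumed index
                id=str(deployment_id),
                document={
                    "deployment_id": str(deployment_id),
                    "usage_gb_hours": float(usage),
                },
            )
```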
What You Bring Along:

* Experience with Python as a programming language
* Experience writing SQL queries
* Experience integrating with systems such as Salesforce, Marketo, etc. is a plus
* Experience or interest in working on SaaS billing or metering systems is a plus
* Good organizational skills and the ability to own and track projects which are critical to the business
* You care deeply about the resiliency of the services and the quality of the features you ship
* Experience with public cloud environments (AWS, GCP, Azure, etc.)
* A self-starter who has experience working across multiple technical teams and decision makers
* You love working with a diverse, worldwide team in a distributed work environment

Additional Information:

* Competitive pay and benefits
* Equity
* Catered lunches, snacks, and beverages in most offices
* An environment in which you can balance great work with a great life
* Passionate people building great products
* Employees with a wide variety of interests
* Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.

Elastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

#Salary and compensation
No salary data was published by the company, so the range below is estimated from similar Cloud, Python, Elasticsearch, SaaS and backend roles:
$67,500 — $125,000/year

#Benefits

401(k) · Distributed team · Async · Vision insurance · Dental insurance · Medical insurance · Unlimited vacation · Paid time off · 4 day workweek · 401k matching · Company retreats · Coworking budget · Learning budget · Free gym membership · Mental wellness budget · Home office budget · Pay in crypto · Pseudonymous · Profit sharing · Equity compensation · No whiteboard interview · No monitoring system · No politics at work · We hire old (and young)