\nWe're looking for a Front-End Developer with more than 4 years of experience in application development with React. The ideal candidate would have experience building and deploying modern web applications.\n\n\nCandidates must have familiarity with the software development life cycle and be able to employ best practices such as TDD and version control (Git/GitHub) in an Agile environment.\n\n\nHere's our app development stack:\n· Frontend: React / TypeScript\n· Backend: FastAPI / SQLAlchemy / PostgreSQL / Snowflake / Elasticsearch\n· Platform: AWS / GitHub / Docker / ECS / GitHub Actions / Sentry / Auth0\n\n\n\nResponsibilities:\n* Build the front-end of data-centric web applications, including the development and maintenance of generic reusable UI components\n* Work with UX, Back-end Developers, Data Engineering and the Product team to design and deliver efficient code that implements business requirements to specification\n* Participate in software architecture decisions\n* Provide application documentation\n\n\n\nQualifications:\n* 4+ years of web application development experience\n* Familiarity with React - Hooks - State management (we use Zustand)\n* Familiarity with TypeScript\n* Familiarity with GitHub\n* Test-driven development in an agile environment\n* Strong communication skills\n* Innovative and collaborative\n\n\n\nOther Tech:\n* Semantic-UI/Material\n* HighCharts / React Charts\n* Jest/Vitest\n* Testing Library\n* Auth0\n* GitHub Actions\n\n\n\n\n\n#LI-RT9\n\n\nEdelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics and data consultancy with a distinctly human mission.\n\n\nWe use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting and connections more meaningful.\n\n\nDXI brings 
together and integrates the necessary people-based PR, communications, social, research and exogenous data, as well as the technology infrastructure to create, collect, store and manage first-party data and identity resolution. DXI comprises over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.\n\n\nTo learn more, visit: https://www.edelmandxi.com \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, React and Docker:\n\n
$70,000 — $115,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
๐ Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
This job post is closed and the position is probably filled. Please do not apply. Work for Daniel J Edelman Holdings and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after the apply link errored with code 404, 1 year ago
\nWe're looking for a Front-end Engineer with 2-3 years of experience in application development with React. The ideal candidate would have experience building and deploying modern web applications.\n\n\nCandidates must have familiarity with the software development life cycle and be able to employ best practices such as TDD and version control (Git/GitHub) in an Agile environment.\n\n\nHere's our app development stack:\n· Frontend: React / TypeScript\n· Backend: FastAPI / SQLAlchemy / PostgreSQL / Snowflake / Elasticsearch\n· Platform: AWS / GitHub / Docker / ECS / GitHub Actions / Sentry / Auth0\n\n\n\nResponsibilities:\n* Build the front-end of data-centric web applications, including the development and maintenance of generic reusable UI components\n* Work with UX, Back-end Developers, Data Engineering and the Product team to design and deliver efficient code that implements business requirements to specification\n* Participate in software architecture decisions\n* Provide application documentation\n\n\n\nRequired Qualifications:\n* 2+ years of web application development experience\n* Familiarity with:\n o React - Hooks - State management (we use Zustand)\n o TypeScript\n o GitHub\n* Test-driven development in an agile environment\n* Strong communication skills\n* Innovative and collaborative\n\n\n\nOther Tech:\n* Semantic-UI/Material\n* HighCharts / React Charts\n* Jest/Vitest\n* Testing Library\n* Auth0\n* GitHub Actions\n\n\n\n\n\n#LI-RT9\n\n\nEdelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics and data consultancy with a distinctly human mission.\n\n\nWe use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting and connections more meaningful.\n\n\nDXI brings together and integrates the necessary people-based PR, communications, social, research and exogenous data, as well as the technology infrastructure to create, 
collect, store and manage first-party data and identity resolution. DXI comprises over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.\n\n\nTo learn more, visit: https://www.edelmandxi.com \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, React, Docker and Engineer:\n\n
$57,500 — $110,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nMadrid
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
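The Edelman posting above asks for familiarity with React Hooks and state management via Zustand. As a rough, dependency-free sketch of the store pattern Zustand popularized (state in a closure, a `set` that merges partial updates and notifies subscribers); all names below are illustrative, not Zustand's actual implementation:

```typescript
// Dependency-free sketch of a Zustand-style store: state lives in a
// closure, `set` merges partial updates and notifies subscribers.
type Listener<T> = (state: T) => void;

function createStore<T>(
  init: (set: (partial: Partial<T>) => void, get: () => T) => T
) {
  let state: T;
  const listeners = new Set<Listener<T>>();

  const set = (partial: Partial<T>) => {
    state = { ...state, ...partial };
    listeners.forEach((listener) => listener(state));
  };
  const get = () => state;

  state = init(set, get);

  return {
    getState: get,
    subscribe: (listener: Listener<T>) => {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe function
    },
  };
}

// Illustrative counter store. In a real React app the same shape would
// come from Zustand's `create`, and components would read it through the
// generated hook (or useSyncExternalStore) rather than `subscribe`.
interface CounterState {
  count: number;
  increment: () => void;
}

const counterStore = createStore<CounterState>((set, get) => ({
  count: 0,
  increment: () => set({ count: get().count + 1 }),
}));

counterStore.subscribe((s) => console.log("count is now", s.count));
counterStore.getState().increment(); // logs "count is now 1"
```

The appeal of this pattern for the stack described above is that the store is plain TypeScript, so it can be unit-tested without rendering any React components.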
\nSenior QA & DevOps Engineer (Remote)\n\n\nPremia is a decentralized options platform connecting traders and liquidity providers of all backgrounds, offering non-custodial options to hedge, speculate, or earn yield on your digital assets. Premia offers first-of-its-kind automated market maker solutions in the DeFi space for options contracts through our use of Smart Liquidity Pools and Dynamic Pricing. Premia enables best-in-class pricing based on market volatility, providing fully-featured peer-to-pool trading and capital efficiency to DeFi options.\n \nWe are one of the smallest and most impactful teams in crypto. We are a globally distributed organization, with all positions being fully remote.\n \nWe're looking for a passionate, self-motivated engineer to help us build the next generation of financial products. As a dedicated Development Operations hire, you will gain ownership over our existing suite of web products, as well as the ability to influence the creation, design, and execution of future products. You will be responsible for ensuring a consistent, high-quality user experience across trading interfaces, data-heavy analytics pages, documentation portals, our subgraph on The Graph, and more.\n\n\nWho are you?\n\n\nA senior-level quality assurance or testing engineer with a focus on web applications who is also a crypto-native.\n\n\nYou have extensive experience designing and executing manual and automated tests. You are proficient with JavaScript/TypeScript, React.js, testing libraries such as Jest/Mocha, automated front-end testing tools like Playwright/Puppeteer, and CI/CD tools such as Jenkins/GitHub Actions.\n\n\nYou have experience with, and are culturally aligned with, fast-moving small teams. You have worked at remote (globally distributed) startups before. You are self-driven, are comfortable wearing many hats and can ship patches and features swiftly when needed. 
You can identify company priorities, own them, and iterate quickly to ship the best solution.\nYou can write and speak fluent English and have great communication skills.\n\n\nResponsibilities\nAs a Senior QA + DevOps Engineer you will work with the Front-end team to:\n- Create and document automated and manual test plans and procedures\n- Configure and set up testing environments\n- Implement, run, and monitor automated tests\n- Help polish our development cycle\n- Continuously improve our existing CI/CD pipelines\n- Write scripts in the language of your choice that can help us improve the QA process\n\n\nRequirements\n- At least 3 years of React + QA experience\n- Passion for web3 / DeFi\n- Extensive experience in designing and executing manual and automated tests\n- Extensive experience with JS/TS + React\n- Extensive experience with automation tools (Playwright preferred)\n- Experience with CI/CD tools (GitHub Actions preferred)\n- Fluent with different operating systems (Linux, macOS, etc.)\n- An entrepreneurial nature, willing to take ownership and work in areas beyond your comfort zone\n- Excellent communication and escalation habits\n- (Nice to have) Previous experience with web3.js or ethers.js libraries\n- (Nice to have) Cloud infrastructure / Docker experience\n\n\n\n\n\nBenefits\nWork from anywhere (remote first), flexible working hours, flexible vacation policy, competitive salary + token bonus (a portion or all can be paid in crypto). Premia is committed to a diverse and inclusive workplace and is an equal opportunity employer. We do not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.\n\n\nPremia welcomes all qualified persons to apply. Compensation will be competitive and commensurate with experience. This is a full-time role.\n\n\nTo find out more you can view their website at https://premia.finance/ \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to macOS, Web3, DeFi, React, Docker, Testing, DevOps, Cloud, Senior and Engineer:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nRemote
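The QA responsibilities in the posting above (creating, documenting, and running automated test plans) reduce to patterns like the table-driven runner below: a minimal, dependency-free sketch of the core loop that tools such as Jest or Playwright's test runner provide at much larger scale. Every name here, including the payoff helper, is illustrative:

```typescript
// Minimal table-driven test runner: each case is a name plus a function
// that throws on failure; the runner tallies passes and records failures.
interface TestCase {
  name: string;
  fn: () => void; // throws on failure
}

interface RunResult {
  passed: number;
  failed: string[];
}

function runTests(cases: TestCase[]): RunResult {
  const failed: string[] = [];
  let passed = 0;
  for (const { name, fn } of cases) {
    try {
      fn();
      passed += 1;
    } catch {
      failed.push(name);
    }
  }
  return { passed, failed };
}

// Hypothetical unit under test: a pure option payoff helper.
const callPayoff = (spot: number, strike: number) =>
  Math.max(spot - strike, 0);

const result = runTests([
  {
    name: "in-the-money call pays spot minus strike",
    fn: () => {
      if (callPayoff(120, 100) !== 20) throw new Error("wrong payoff");
    },
  },
  {
    name: "out-of-the-money call pays zero",
    fn: () => {
      if (callPayoff(80, 100) !== 0) throw new Error("wrong payoff");
    },
  },
]);
// result: { passed: 2, failed: [] }
```

Real frameworks add isolation, reporting, retries and browser automation on top, but a documented test plan ultimately compiles down to a table of named cases like this one.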
# **About SipScience** \n\nSipScience offers a revolutionary data analytics and marketing platform delivering never-before-seen consumer insights from inside bars & restaurants. \n\n**Our Offerings:**\n\nSip App - Our Android and iOS phone app that provides contactless payments in select bars & restaurants, as well as a Member Loyalty Program that allows users to get 50% off drinks! \n\nSipSync - a marketing analytics app with a revolutionary approach: it uses a limited panel of consumers representative of the overall alcoholic beverage market to obtain a real-time picture of habits nationwide in every bar and restaurant category. This unique panel is leveraged to gather insights with a never-seen-before level of granularity that can be accessed via a simple POS extension. SipScience's analytics dashboard can help bars and restaurants tailor their communication to their customer base, improve loyalty, and optimize pricing.\n\n**Why work here?**\n\n_We are a tech startup that believes people are the most important asset to an organization. We have a small, passionate team that is excited about our product offerings and the insights we provide. 
We know what we are working on hasn't been done before and it's very exciting to solve problems on the edge of a new frontier in tech._\n\n\n# **Who We Are Looking for** \n\nSipScience is looking for an innovative senior-level engineer to join our team as the **Senior Mobile Developer.** The Senior Dev's primary responsibility is to own, maintain, and scale SipScience's stack.\n\n**Primary Responsibilities**\n\n\n\n* Design and architect **_solutions_** for high-level goals and objectives\n* **_Actively_** contribute to code reviews and pull requests\n* **_Design mobile applications_** from top to bottom, troubleshoot and fix tough problems, hit deadlines, and know when to push back on requirements\n\n**Job Requirements**\n\n\n\n* _Minimum_ five (5) years of experience programming in a professional environment\n* BA/BS in Computer Science **OR** demonstrated expertise in programming, including a solid foundation in computer science\n* Culture leader: commitment to **_Agile methodologies_** (among others, **_collaboration_** and **_transparency_** are important for us; we're not perfect at following it, but we aim at getting better every sprint)\n* **_GoLang_** programming expertise (_minimum_ three (3) years)\n* Experience in **_Docker & AWS_** (_minimum_ three (3) years)\n* Must have **React Native** (JavaScript/TypeScript) expertise (_minimum_ three (3) years)\n* Comfortable with working in a startup environment (we're a small team, we do our best to work as rationally as possible, but we're not perfect!)\n* Strong problem solving and **_debugging skills_** with direct experience within mobile application environments, including monitoring, alerting, and distributed tracing for iOS & Android\n* A good sense of organization, the **_ability to set expectations_** and keep working on a timeline, and the ability to document, communicate and explain technology decisions and directions\n\n**Technologies we are currently using**\n\n\n\n* 
**_Deployment_** to Google Play and App Store\n* **_Our tech stack:_** React Native, JavaScript, Go, Docker, PostgreSQL, AWS, Python, SQL, Stripe, Firebase, and miscellaneous third-party APIs such as Twilio SendGrid, Segment, etc.\n* **_Our app deals with_** data structures, algorithms, object-oriented software design, and working with cloud-based distributed systems\n\n**Technologies we expect to use in the future**\n\n\n\n* Node.js, AngularJS, Google Cloud, Kubernetes, TensorFlow, Omnivore API, BigQuery, Superset\n\n**Soft Skills/Characteristics we love**\n\n\n\n* Extreme passion for **_cracking problems_** **_efficiently_** and **_elegantly_**\n* Dedication to mentoring & **_growth of teammates_**\n* Strong **_organizational_** and **_communication skills_**, with the ability to present your ideas to **_both_** **_technical and non-technical audiences_**\n* Natural tendency to be **_curious, positive, and creative_**\n* Team player who collaborates well with others **_(don't be a dick!)_**\n\n **Location** \n\n\n\n* This is a remote position\n\n\n \n\nPlease mention the word **ILLUSTRIOUS** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xOTg=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Salary and compensation\n
$10,000 — $10,000/year\n
\n\n#Location\nUnited States
This job post is closed and the position is probably filled. Please do not apply. Work for Splitgraph and want to re-open this job? Use the edit link in the email when you posted the job!
# We're building the Data Platform of the Future\nJoin us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.\n\nSplitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.\n\nSplitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.\n# Open Positions\n**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.\n\n[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)\n\n[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)\n\n→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)\n\n# What is Splitgraph?\n## **Open Source Toolkit**\n\n[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. 
Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.\n\n## **Splitgraph Cloud**\n\nSplitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.\n\n# Learn More About Us\n\n- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)\n\n- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)\n\n- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))\n\n- [Read our blog](https://www.splitgraph.com/blog)\n\n- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)\n\n- [Follow us on Twitter](https://www.twitter.com/splitgraph)\n\n- [Find us on GitHub](https://www.github.com/splitgraph)\n\n- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)\n\n- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets\n\n# How We Work: What's our stack look like?\n\nWe prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. 
Here's a sampling of the languages and tools we work with:\n\n- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.\n\n- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.\n\n- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. 
We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.\n\n- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.\n\n- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).\n\n- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. 
We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.\n\n- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).\n\n- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.\n\n- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.\n\n- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.\n\n- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.\n\n- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. 
That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.\n\n- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.\n\n- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food)**. We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.\n\n- **The occasional best-of-breed SaaS services for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.\n\n- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. 
We don't touch them much because they do their job well and rarely break.\n\n# Life at Splitgraph\n**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.\n\n**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).\n\n**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.\n\n**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.\n\n# Benefits\n- Fully remote\n\n- Flexible working hours\n\n- Generous compensation and equity package\n\n- Opportunity to make high-impact contributions to an agile team\n\n# How to Apply? Questions?\n[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)\n\nIf you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected]) \n\nPlease mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xOTg=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Location\nWorldwide
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
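The Splitgraph posting above describes fully-typed GraphQL queries generated by graphql-codegen. A rough sketch of what that typing buys, using a hand-rolled "typed document" that carries its result and variable types so any transport returns typed data; the schema, query, and transport below are hypothetical stand-ins, not Splitgraph's actual API or graphql-codegen's real output format:

```typescript
// A document pairs a query string with phantom result/variable types,
// so call sites get full type inference without casts.
interface TypedDocument<TResult, TVariables> {
  query: string;
  __result?: TResult;       // phantom type carriers, never set at runtime
  __variables?: TVariables;
}

interface RepoResult {
  repository: { namespace: string; name: string } | null;
}
interface RepoVariables {
  namespace: string;
  name: string;
}

// Stand-in for a codegen-emitted artifact (names are illustrative).
const RepositoryDocument: TypedDocument<RepoResult, RepoVariables> = {
  query: `query Repo($namespace: String!, $name: String!) {
    repository(namespace: $namespace, name: $name) { namespace name }
  }`,
};

type Transport = (query: string, variables: unknown) => unknown;

// TResult/TVariables are inferred from the document, so the return
// value and the variables argument are both checked by the compiler.
function execute<TResult, TVariables>(
  transport: Transport,
  doc: TypedDocument<TResult, TVariables>,
  variables: TVariables
): TResult {
  // A real transport would POST { query, variables } to the endpoint.
  return transport(doc.query, variables) as TResult;
}

// Stub transport for illustration (a real one would use fetch or
// apollo-client against the GraphQL server).
const stub: Transport = () => ({
  repository: { namespace: "splitgraph", name: "example" },
});

const data = execute(stub, RepositoryDocument, {
  namespace: "splitgraph",
  name: "example",
});
// data.repository is typed as { namespace: string; name: string } | null
```

The single unsafe cast lives inside `execute`; everything downstream is statically typed, which is the property the codegen-based setup described above automates across an entire schema.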
This job post is closed and the position is probably filled. Please do not apply. Work for Segment and want to re-open this job? Use the edit link in the email when you posted the job!
# Lead Product Engineer - Core Experience\nSegment is building the future of how companies manage their constantly increasing volume of customer data. We help our customers collect data from a variety of sources, combine and understand that data, and ultimately act on it to give their users a better experience.\nImagine you want to answer a question that is core to your business: maybe you changed the pricing on your product and you want to understand if that's driving revenue or creating churn and customer confusion. To properly answer that question, you would need data from your payment processor, your CRM, and telemetry data from your application. In the past, business teams have had to wait for developers to build ETL pipelines to move data from one place to another. This is painful, time-consuming, and doesn't keep up with the pace of customer needs. Segment allows you to get all of this data in one place, automatically, and start using it immediately rather than spending time building data pipelines.\n## Who we are:\nWe're a small distributed team of full-stack engineers based in San Francisco, Vancouver and around the world who love to ship high-quality code.\nFrom collecting data through [analytics.js](https://github.com/segmentio/analytics.js), to building powerful tools for data governance, to implementing algorithms that can handle complex billing scenarios at scale, to optimizing sign-up conversion, the Product Engineering team is focused on creating fantastic user experiences.\nWe're looking for talented engineers who are passionate about building world-class experiences that delight our customers.\n## How we work:\n- We enjoy building UIs in React so much that we created our own internal components library.\n- We believe in using the best tool for the job. We write customer-facing features using React, NodeJS and GraphQL. 
Our write-heavy traffic services are written in Go and leverage multiple data storage solutions.
- We deploy our code multiple times per day. We "semver" everything :)
- We love conferences. (An engineer spoke in 4 different countries last year!)
- We love open source: https://open.segment.com
- We're proud of the code we write, but we're not dogmatic about methodologies or techniques. We believe building the "right thing" is more important than building things "right".

## Who we're looking for:

You can turn complex business requirements into working software that our customers love to use.
- You're proud of the code you write, but you're also pragmatic.
- You know when it's time to refactor, and when it's time to ship.
- You're focused, driven and can get challenging projects across the finish line.
- You're empathetic, patient and love to help your teammates grow.
- You have experience running apps in production and take software engineering practices seriously. You write meaningful tests and understand the value of great logging, proper monitoring and error tracking.

## A few projects you could be working on:

- We collaborated closely with our BizOps and Design teams to rebuild all parts of our billing experience, from the customer's first visit, to building a pricing simulator tool, to implementing algorithms that can handle complex billing scenarios at our scale.
- We're building an [open-source version](https://github.com/segmentio/evergreen) of our UI library that saves our engineers multiple hours of work every week.
Think pixel-perfect implementations by default.
- We used an [HLL](http://antirez.com/news/75) to scale an analytics tool that handles thousands of requests per second.
- We're building powerful tools that help our customers protect the integrity of their data, and the decisions they make with it.

## Requirements:

- You can write both client-side and server-side JavaScript using the latest APIs and language features.
- You have some familiarity with Golang or are excited to learn it.
- Minimum of 3 years of industry experience in engineering, or some cool projects on GitHub you think we'll love to check out.
- You have a deep understanding of the complexities involved in writing large single-page applications.
- You have been exposed to the architectural patterns of high-scale web applications (e.g., well-designed APIs, high-volume data pipelines, efficient algorithms).

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar JavaScript, React, Node, Golang, Engineer, GraphQL, Docker and Executive jobs:
$65,000 — $120,000/year

#Benefits

Distributed team
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.