This job post is closed and the position is probably filled. Please do not apply.
LOCATION: Remote within the U.S.

Civiqs is the leading online scientific polling platform and a division of Kos Media LLC. Civiqs has been conducting surveys online since 2014. Every day, Civiqs surveys thousands of people across the United States on politics, culture, and current affairs. With years of daily responses on a huge array of questions, Civiqs maintains one of the largest databases of public opinion in the United States. The scale and quality of Civiqs' public opinion data, and of its online survey panel, are unmatched in the survey industry.

We are hiring an experienced and results-driven Senior Software Development Engineer to join our talented remote engineering team. As Senior Software Development Engineer, you will ensure that the research platform operates smoothly and accurately, and that new features are delivered on time and to specification. You will work alongside researchers and data scientists to help shape the Civiqs survey application as we expand our development team and build new research products and features.

We have an energized team of survey researchers with diverse backgrounds and skill sets. If you're interested in a position that offers more than just a technical challenge, we'd like to hear from you. The ideal candidate for this position will love data as much as we do! You are an innovator and a collaborator with an innate dedication to excellence. You are self-motivated, efficient, and capable of delivering results with limited guidance.
Come join our team and guide the future of public opinion research using the Civiqs panel!

Responsibilities

- Design, architect, code, and maintain application solutions for Civiqs using industry best practices
- Maintain and optimize our Python/PyMC modelling application and data science pipeline
- Develop a deep understanding of our frontend and backend systems, infrastructure, cloud services, and DevOps automation tools
- Clearly and precisely communicate technical issues and establish day-to-day priorities with other developers and non-technical stakeholders
- Mentor other developers on the team through pairing and direct feedback
- Kick off and lead technical team decision-making in collaboration with research and engineering management
- Quickly identify and address bugs, and anticipate runtime issues from code changes that may affect extremely large data sets
- Write detailed automated test cases for new features
- Work collaboratively with the engineering team and lead complex releases, often involving multiple systems and large data migrations
- Partner with research team members to ensure that documented requirements meet the team's needs and that development priorities are aligned
- Remain current on test, development, and deployment best practices

Experience

- 8+ years of experience in professional software development using Python and/or Ruby, and JavaScript
- Experience with the Python data science ecosystem (PyMC, Pandas, Spark, etc.) preferred
- Production experience with ReactJS a bonus
- 4+ years of experience working in an Agile, Kanban, or similar collaborative environment
- Experience working with fully remote teams preferred

Qualifications

- Experience maintaining and developing new features in large and complex codebases
- Working knowledge of systems or operations at the OS and basic networking levels
- Ability to write, run, and optimize raw SQL queries in MySQL or PostgreSQL; knowledge of other data stores a plus
- Experience with containerized application development using Docker preferred
- Experience measuring system performance and implementing security best practices
- Strong track record of developing software using automated testing tools
- Awareness of typical programming errors and of the unexpected things users do, whether accidentally or maliciously
- Ability to analyze and debug distributed data processing systems
- Motivated, organized, and self-directed technical leader
- Critical thinker with a thirst for knowledge and continuous improvement
- Ability to work autonomously, take ownership, and deliver a quality software experience
- Excellent communication skills; comfortable talking with team members at all levels
- Willingness to become a Civiqs platform expert

SALARY RANGE: $130,000 - $165,000

This position is a 40-hour/week, full-time exempt position and reports to Civiqs' Engineering Manager. Candidates must be legally eligible to work in the United States. The position offers a flexible work environment, the ability to work remotely or from home, a competitive salary, and excellent benefits, including: full medical, dental, and vision coverage; an optional 401K with a company match; a remote worker stipend; a generous vacation package; traumatic grief leave; a professional development stipend; and employer-paid maternity/family leave.
Our organizational commitment to personal growth and work-life balance reduces churn and makes for a very rewarding long-term position.

At Civiqs, we believe that the diversity of ideas, experiences, and cultures that our employees contribute to our organization helps us be more effective in our work, and we are proud to be an inclusive and equal-opportunity workplace. The atmosphere in our office is energized by the day's news events and by people united by a common cause. We're a company that loves learning and supports growth and training for all our employees.

Women, people of color, and LGBTQ+ individuals are strongly encouraged to apply.

Please mention the word **HOT** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search for this word to find applicants who read the post and see that they're human.

#Salary and compensation
$130,000 — $160,000/year
#Location
United States
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
LOCATION: Remote within the U.S.

Civiqs is the leading online scientific polling platform and a division of Kos Media LLC. Civiqs has been conducting surveys online since 2014. Every day, Civiqs surveys thousands of people across the United States on politics, culture, and current affairs. With years of daily responses on a huge array of questions, Civiqs maintains one of the largest databases of public opinion in the United States. The scale and quality of Civiqs' public opinion data, and of its online survey panel, are unmatched in the survey industry.

We are hiring an experienced and results-driven Software Development Engineer to join our talented remote engineering team. As Software Development Engineer, you will ensure that the research platform operates smoothly and accurately, and that new features are delivered on time and to specification. You will work alongside researchers and data scientists to help shape the Civiqs survey application as we expand our development team and build new research products and features.

We have an energized team of survey researchers with diverse backgrounds and skill sets. If you're interested in a position that offers more than just a technical challenge, we'd like to hear from you. The ideal candidate for this position will love data as much as we do! You are an innovator and a collaborator with an innate dedication to excellence. You are self-motivated, efficient, and capable of delivering results with limited guidance.
Come join our team and guide the future of public opinion research using the Civiqs panel!

Responsibilities

- Design and code application solutions for Civiqs using industry best practices
- Build a deep understanding of our frontend and backend systems, infrastructure, cloud services, and DevOps automation tools
- Clearly and precisely communicate technical issues with other developers and non-technical stakeholders
- Quickly identify and address bugs, and anticipate runtime issues from code changes that may affect extremely large data sets
- Write detailed automated test cases for new features
- Work collaboratively with the engineering team to coordinate complex releases, often involving multiple systems and large data migrations
- Partner with research team members to ensure that documented requirements meet the team's needs
- Remain current on test, development, and deployment best practices
- Be a team player: share knowledge and collaborate through pairing, feedback, and discussion

Experience

- 5+ years of experience in professional software development using Python and/or Ruby, and JavaScript
- Production experience with ReactJS preferred
- 3+ years of experience working in an Agile, Kanban, or similar collaborative environment
- Experience working with fully remote teams preferred

Qualifications

- Extensive development experience in large and complex codebases
- Working knowledge of systems or operations at the OS and basic networking levels
- Ability to write, run, and optimize raw SQL queries in MySQL or PostgreSQL; knowledge of other data stores a plus
- Experience with containerized application development using Docker preferred
- Experience measuring system performance and implementing security best practices
- Strong track record of developing software using automated testing tools
- Awareness of typical programming errors and of the unexpected things users do, whether accidentally or maliciously
- Ability to analyze and debug distributed data processing systems
- Motivated, organized, and self-directed technical leader
- Critical thinker with a thirst for knowledge and continuous improvement
- Ability to work autonomously, take ownership, and deliver a quality software experience
- Excellent communication skills; comfortable talking with team members at all levels
- Willingness to become a Civiqs platform expert

SALARY RANGE: $115,000 - $130,000

This position is a 40-hour/week, full-time exempt position and reports to Civiqs' Engineering Manager. Candidates must be legally eligible to work in the United States. The position offers a flexible work environment, the ability to work remotely or from home, a competitive salary, and excellent benefits, including: full medical, dental, and vision coverage; an optional 401K with a company match; a remote worker stipend; a generous vacation package; traumatic grief leave; a professional development stipend; and employer-paid maternity/family leave. Our organizational commitment to personal growth and work-life balance reduces churn and makes for a very rewarding long-term position.

At Civiqs, we believe that the diversity of ideas, experiences, and cultures that our employees contribute to our organization helps us be more effective in our work, and we are proud to be an inclusive and equal-opportunity workplace. The atmosphere in our office is energized by the day's news events and by people united by a common cause. We're a company that loves learning and supports growth and training for all our employees.

Women, people of color, and LGBTQ+ individuals are strongly encouraged to apply.

Please mention the word **GRANDEUR** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search for this word to find applicants who read the post and see that they're human.

#Salary and compensation
$110,000 — $130,000/year
#Location
United States
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future
Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful, and productive engineers in the world.

# Open Positions
**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)

# What is Splitgraph?
## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning, and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem.
Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration, and governance. Users can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://www.twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore), where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction.
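The "data image" idea behind sgr can be sketched in a few lines of plain Python. This is an illustration only: the `DataRepository` class and `image_hash` helper below are hypothetical stand-ins rather than Splitgraph's actual API, but they show the Docker/Git-style content addressing that makes dataset versions reproducible and shareable.

```python
import hashlib
import json

def image_hash(rows):
    """Content-address a dataset snapshot, Docker/Git style:
    identical data always yields the same image id."""
    payload = json.dumps(sorted(rows, key=json.dumps), sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

class DataRepository:
    """A toy versioned dataset: each commit stores a snapshot under its
    content hash, like a (vastly simplified) Splitgraph data image."""

    def __init__(self):
        self.images = {}   # image id -> rows
        self.history = []  # ordered image ids

    def commit(self, rows):
        img = image_hash(rows)
        if img not in self.images:
            self.images[img] = list(rows)
        self.history.append(img)
        return img

    def checkout(self, img):
        return list(self.images[img])

repo = DataRepository()
v1 = repo.commit([{"state": "CA", "approve": 0.52}])
v2 = repo.commit([{"state": "CA", "approve": 0.49}])
assert v1 != v2                 # different data, different image id
assert repo.checkout(v1) == [{"state": "CA", "approve": 0.52}]
```

Because the id is derived from the content, two instances that exchange images can verify they hold the same dataset version, which is the property that makes push/pull of data images workable.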
Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy), and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases.
We have `auth-db` for storing sensitive data, `registry-db`, which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db`, where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need to manually maintain an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query.
We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACLs, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot-reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway, or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets) or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use it to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage.
That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously and try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash.
We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph
**We are a young company building its initial team.** As an early contributor, you'll have a chance to shape our mission, growth, and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits
- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?
[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read the post and see that they're human.

#Location
Worldwide
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
Who We Are:

Cardinal Financial is a nationwide direct mortgage lender whose mission is to prove that homeownership is possible for everyone. By bringing an open-minded approach to an often closed-minded industry, we're able to embrace every unique financial situation differently in order to craft the best possible loans for our borrowers. We pride ourselves on providing excellent service backed by our groundbreaking technology, and these two components of our process come together to complete a simple, personalized mortgage experience. But it all starts with our people.

We believe that no matter where you fit in our organization—Sales, Human Resources, Information Technology, or even re-stocking the break rooms with endless coffee supplies—everyone can influence the experience that we provide to our customers and our partners. We tell our customers and our partners that anything can be reimagined. So why not your career? Looking to join a company that values its people, innovates and expands on its proprietary technology, and is growing at a ridiculous rate?! Apply below!

Who We Need:

We are looking for a DevOps Engineer to join the team managing the infrastructure for a national mortgage lender's technology platform. You will design, build, and support the infrastructure using modern tools like Terraform, Kubernetes (K8s), and GitLab in a multi-cloud environment.
You will work with the Software Engineering, Production Systems, and Business Intelligence teams in a highly collaborative organization.

What You Will Do:

* Design, implement, and maintain Infrastructure as Code (IaC)

* Improve code deployment and unit testing frameworks

* Improve monitoring of infrastructure and applications

* Maintain and improve our security posture, ensuring best practices are adhered to in new projects

* Develop and maintain extract, transform, and load (ETL) mechanisms for data analysis

* Manage your tasks and their priorities with feedback and review from a supportive team

* Investigate new technologies and deploy them in support of the team

What You Need:

* At least 1 year managing cloud provider resources in AWS, Azure, or GCP

* Experience writing and maintaining complex Dockerfiles

* Experience writing CI/CD (Continuous Integration / Continuous Deployment) pipelines using tools such as GitLab or Jenkins

* Experience implementing network, server, and application-status monitoring tools

* 3+ years of Linux/Unix experience

* 1+ years of experience with Git

* Experience with basic database administration and SQL

* 1+ years of networking and security experience a plus

* Use of infrastructure-as-code tools such as Terraform or CloudFormation a plus

* Use of server provisioning software such as Ansible, Puppet, or Chef a plus

* Experience with container orchestration using tools like Kubernetes or Docker Swarm a plus

* Knowledge of Python and/or Java a plus

What We Offer:

* Strength, stability, and vision

* Highly engineered proprietary technology that is revolutionizing the mortgage industry

* An empowered culture where your ideas are important and your voice matters

* Opportunity for career growth

* Benefits effective the first day of the month following your start date, including medical, dental, vision, and much more

* 401K with a 50% match up to a maximum employee contribution of 5%, effective the 1st of the month following 30 days of employment

Our Technology:

Our SaaS enterprise mortgage lending platform is a challenging and complex system that includes lender and borrower interfaces, workflow, document management, advanced automation, and integrations with external entities and services.

The server architecture is stateless, cleanly managing the business logic and persistence layer, exposed as a RESTful JSON API. The server is written using a combination of Java 8 on Jetty and Node.js for asynchronous tasks. We persist our data in MySQL using MyBatis and use Redis for caching, metrics, and non-critical message queueing.

The UI uses a custom JavaScript MVC framework with many modern techniques: dynamically loaded code modules, client-side routing and templates, powerful data-binding features, integrated services, and an advanced component architecture.

We develop on Macs and deploy on AWS. Our tools include: GitHub, Jenkins, Gradle, Grunt, JAXB, iText, Aspose, IntelliJ IDEA, and Pivotal Tracker.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to DevOps, Engineer, JavaScript, Finance, Java, Cloud, Python, SaaS, and Linux:
$70,000 — $120,000/year
#Benefits
- 401(k)
- Distributed team
- Async
- Vision insurance
- Dental insurance
- Medical insurance
- Unlimited vacation
- Paid time off
- 4 day workweek
- 401k matching
- Company retreats
- Coworking budget
- Learning budget
- Free gym membership
- Mental wellness budget
- Home office budget
- Pay in crypto
- Pseudonymous
- Profit sharing
- Equity compensation
- No whiteboard interview
- No monitoring system
- No politics at work
- We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.