The Mortgage Engineering team is seeking a highly skilled and experienced Senior Backend Engineer with a strong focus on microservices architecture to join our team. The ideal candidate will be proficient in Java and possess in-depth knowledge of Kafka, SQS, Redis, Postgres, Grafana, and Kubernetes. You are an expert in working with and scaling event-driven systems, webhooks, and RESTful APIs, and in solving concurrency and distributed-systems challenges. As a Senior Backend Engineer at Ocrolus, you will be responsible for designing, developing, and maintaining highly scalable and reliable backend systems. You will work closely with product managers, designers, and other engineers to ensure our services meet the highest standards of performance and reliability, specifically tailored to the needs of the mortgage industry.

Key Responsibilities:

* Design, develop, and maintain backend services and microservices architecture using Java.
* Implement event-driven systems utilizing Kafka and AWS SQS for real-time data processing and messaging (see the sketch after this list).
* Optimize and manage in-memory data stores with Redis for high-speed caching and data retrieval.
* Develop and maintain robust database solutions with Postgres, ensuring data integrity and performance with PgAnalyze.
* Deploy, monitor, and manage containerized applications using Kubernetes and Terraform, ensuring their scalability and resilience, and manage our cloud infrastructure.
* Collaborate closely with product managers and designers to understand requirements and deliver technical solutions that meet business needs.
* Develop and maintain RESTful APIs and gRPC services to support seamless integration with frontend applications and third-party services.
* Ensure secure and efficient authentication and authorization processes using OAuth.
* Manage codebases in a monorepo environment using Bazel for build automation.
* Troubleshoot and resolve client support issues in a timely manner, ensuring minimal disruption to service.
* Continuously explore and implement new technologies and frameworks to improve system performance and efficiency.
* Write and maintain technical documentation in Confluence to capture technical plans and processes and facilitate knowledge sharing across the team.
* Mentor junior engineers and contribute to the overall growth and development of the engineering team.
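The event-driven work above centers on consuming messages from SQS and Kafka. The role itself is Java-focused; purely as an illustration of the SQS long-polling pattern, here is a minimal Python sketch using boto3, where the queue URL and handler are placeholders and credentials are assumed to come from the environment:

```python
# Minimal SQS long-polling consumer sketch (illustrative only; the role itself is Java-based).
# Assumes standard boto3 and AWS credentials in the environment; the queue URL is a placeholder.
import json

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

def handle(event: dict) -> None:
    """Placeholder business logic for one event."""
    print("processing", event.get("id"))

def poll_forever() -> None:
    sqs = boto3.client("sqs")
    while True:
        # Long polling: wait up to 20 s for messages instead of busy-looping.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,
        )
        for msg in resp.get("Messages", []):
            handle(json.loads(msg["Body"]))
            # Delete only after successful processing so failed messages are redelivered.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_forever()
```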
Required Qualifications:

* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 5+ years of professional experience in backend development with a focus on microservices.
* Proficiency in Java, with a strong preference for expertise in the Spring framework.
* Strong experience with Apache Kafka for building event-driven architectures.
* Hands-on experience with AWS SQS for message queuing and processing.
* Expertise in Redis for caching and in-memory data management.
* Solid understanding of Postgres or other relational databases, including performance tuning, migrations, and optimization.
* Proven experience with Kubernetes for container orchestration and management.
* Proficiency in developing and consuming RESTful APIs and gRPC services.
* Proficiency with the command line, Git for version control, and GitHub for code reviews.
* Familiarity with OAuth for secure authentication and authorization.
* Strong understanding of software development best practices, including version control, testing, and CI/CD automation.
* Excellent problem-solving skills and the ability to work independently and as part of a team.
* Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Preferred Qualifications:

* Experience working in the mortgage and fintech industries, with a deep understanding of domain-specific challenges and B2B SaaS requirements.
* Experience managing codebases in a monorepo environment with Bazel for build automation.
* Understanding of security best practices and their implementation in microservices.
* Experience with performance monitoring and logging tools such as Grafana, Sentry, PgAnalyze, Prometheus, and New Relic.
* Familiarity with cloud platforms such as AWS.
* Familiarity with Python.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Redis, Java, Cloud, Git, Senior, Junior, Engineer, and Backend roles:
$65,000 – $115,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Gurgaon, Haryana, India
Memora Health works with leading healthcare organizations to make complex care journeys simple for patients and clinicians so that care is more accessible, actionable, and always-on. Our team is rapidly growing as we expand our programs to reach more health systems and patients, and we are excited to bring on a Senior Data Engineer.

In this role, you will be responsible for driving the architecture, design, and development of our data warehouse and analytics solutions, alongside APIs that allow other internal teams to interact with our data. The ideal candidate will be able to collaborate effectively with Memora's Product Management, Engineering, QA, TechOps, and business stakeholders.

This role will work closely with cross-functional teams to understand customer pain points and identify, prioritize, and implement maintainable solutions. Ideal candidates will be driven not only by the problem we are solving but also by the innovative approach and technology that we are applying to healthcare, looking to make a significant impact on healthcare delivery. We're looking for someone with exceptional curiosity and enthusiasm for solving hard problems.

Primary Responsibilities:

* Collaborate with the Technical Lead, fellow engineers, Product Managers, QA, and TechOps to develop, test, secure, iterate, and scale complex data infrastructure, data models, data pipelines, APIs, and application backend functionality.
* Work closely with cross-functional teams to understand customer pain points and identify, prioritize, and implement maintainable solutions.
* Promote product development best practices, supportability, and code quality, both by leading by example and by mentoring other software engineers.
* Manage and pare back technical debt, escalating to the Technical Lead and Engineering Manager as needed.
* Establish best practices for designing, building, and maintaining data models.
* Design and develop data models and transformation layers to support reporting, analytics, and AI/ML capabilities.
* Develop and maintain solutions to enable self-serve reporting and analytics.
* Build robust, performant ETL/ELT data pipelines.
* Develop data quality monitoring solutions to increase data quality standards and metrics accuracy (see the sketch after this list).
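As a flavor of the data quality monitoring mentioned above, here is a minimal, dependency-free Python sketch that flags columns whose null rate exceeds a threshold. The column names, sample rows, and threshold are illustrative assumptions, not Memora's actual checks:

```python
# Minimal data-quality check sketch: flag columns whose null rate exceeds a threshold.
# Purely illustrative; thresholds, column names, and sample rows are made up.
from collections import Counter
from typing import Iterable

def null_rates(rows: Iterable[dict]) -> dict[str, float]:
    """Return the fraction of missing (None or empty) values per column."""
    rows = list(rows)
    if not rows:
        return {}
    missing: Counter = Counter()
    columns = set().union(*(row.keys() for row in rows))
    for row in rows:
        for col in columns:
            value = row.get(col)
            if value is None or value == "":
                missing[col] += 1
    return {col: missing[col] / len(rows) for col in columns}

def failing_columns(rows: Iterable[dict], threshold: float = 0.05) -> list[str]:
    """Columns whose null rate exceeds the allowed threshold (default 5%)."""
    return sorted(col for col, rate in null_rates(rows).items() if rate > threshold)

if __name__ == "__main__":
    sample = [
        {"patient_id": "p1", "program": "cardiology", "enrolled_at": "2024-01-02"},
        {"patient_id": "p2", "program": None, "enrolled_at": "2024-01-03"},
        {"patient_id": "p3", "program": "oncology", "enrolled_at": ""},
    ]
    print(failing_columns(sample, threshold=0.10))  # -> ['enrolled_at', 'program']
```

In practice a check like this would run inside the pipeline after each load and push its results to the monitoring stack, but the shape of the logic is the same.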
Qualifications (Required):

* 3+ years of experience shipping, maintaining, and supporting enterprise-grade software products
* 3+ years of data warehousing / analytics engineering experience
* 3+ years of data modeling experience
* Disciplined in writing readable, testable, and supportable code in JavaScript, TypeScript, Node.js (Express), Python (Flask, Django, or FastAPI), or Java
* Expertise in writing and consuming RESTful APIs
* Experience with relational or NoSQL databases (PostgreSQL, MySQL, MongoDB, Redis, etc.)
* Experience with data warehouses (BigQuery, Snowflake, etc.)
* Experience with analytical and reporting tools, such as Looker or Tableau
* Inclination toward test-driven development and test automation
* Experience with scrum methodology
* Excels at mentoring junior engineers
* B.S. in Computer Science or another quantitative field, or related work experience

Qualifications (Bonus):

* Understanding of DevOps practices and technologies (Docker, Kubernetes, CI/CD, test coverage and automation, branch and release management)
* Experience with security tooling in the SDLC and Security by Design principles
* Experience with observability and APM tooling (Sumo Logic, Splunk, Sentry, New Relic, Datadog, etc.)
* Experience with an integration framework (Mirth Connect, Mule ESB, Apache NiFi, Boomi, etc.)
* Experience with healthcare data interoperability frameworks (FHIR, HL7, CCDA, etc.)
* Experience with healthcare data sources (EHRs, claims, etc.)
* Experience working at a startup

What You Get:

* An opportunity to work on a rapidly scaling care delivery platform, engaging thousands of patients and care team members and growing 2-3x annually
* A highly collaborative environment and the fun challenges of scaling a high-growth startup
* Work alongside world-class clinical, operational, and technical teams to build and scale Memora
* Shape how leading health systems and plans think about modernizing the care delivery experience for their patients and care teams
* Improve the way care is delivered for hundreds of thousands of patients
* Gain deep expertise in healthcare transformation and direct customer exposure with the country's most innovative health systems and plans
* Ownership over your success and the ability to significantly impact the growth of our company
* Competitive salary and equity compensation with benefits including health, dental, and vision coverage, flexible work hours, paid maternity/paternity leave, bi-annual retreats, a MacBook, and a 401(k) plan

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Python, DevOps, NoSQL, Senior, Engineer, and Backend roles:
$60,000 – $110,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
This job post is closed and the position at SportyBet has probably been filled. Please do not apply.
*Sporty's sites are some of the most popular on the internet, consistently staying in Alexa's list of top websites for the countries they operate in.*

In this role, you'll be responsible for developing microservices in a distributed deployment environment, with an emphasis on containerisation with Docker and K8S. You won't just be writing simple CRUD applications; instead, you will be working on the core logic of complex systems that are accessed millions of times a day. We wrote our system from scratch about 3 years ago, so you'll be working with the latest technology and won't have to worry about decades-old legacy code.

We are hiring both mid- and senior-level Backend Engineers. Prior Spring Boot experience is not required; a willingness to work with it is fine, as long as you are willing to learn and have demonstrable experience in an object-oriented programming language.

**Who We Are**

Sporty Group is a consumer internet and technology business with an unrivalled sports media, gaming, social and fintech platform, serving millions of daily active users across the globe via technology and operations hubs in more than 10 countries and 3 continents.

The recipe for our success is to discover intelligent and energetic people who are passionate about our products and serving our users, and to attract and retain them with a dynamic and flexible work life which empowers them to create value and rewards them generously based upon their contribution.

We have already built a capable and proven team of 300+ high achievers from a diverse set of backgrounds, and we are looking for more talented individuals to drive further growth and contribute, through their grit and innovation, to the creativity and hard work that serves our users.

**Our Stack** *(we don't expect you to have all of these)*

Backend Application Framework: Spring Boot (Java Config + Embedded Tomcat)

Micro Service Framework: Spring Cloud Dalston (Netflix Eureka + Netflix Zuul + Netflix Ribbon + Feign)

Database Sharding Middleware: Lede Cetus

Database: MySQL and Oracle, MyBatis, Druid

Public Cache: AWS ElastiCache + Redis

Message Queue: Apache RocketMQ

Distributed Scheduling: Dangdang Elastic Job

Data Index and Search: ElasticSearch

Log Real-time Visualization: ElasticSearch + Logstash + Kibana

Business Monitoring: Graphite + Grafana

Cluster Monitoring: Zabbix + AWS CloudWatch

Tasking: Elastic Job

Server: Netty

**Responsibilities**

Develop highly scalable mobile internet backends for millions of users

Work with Product Owners and other development team members to determine new features and user stories needed in new or revised applications or large/complex development projects

Participate in code reviews with peers and managers to ensure that each increment adheres to the original vision as described in the user story and to all standard resource libraries and architecture patterns as appropriate

Respond to support calls for applications in production for quick diagnosis and repair to keep things running smoothly for users

Participate in all team ceremonies, including planning, grooming, product demonstration and team retrospectives

Mentor less experienced team members

**Requirements**

Experience in Spring Boot, Spring Cloud, Spring Data and iBATIS

Strong experience with highly scalable web backends

Experience designing highly transactional systems

Advanced proficiency in Object Oriented Design (OOD) and analysis
Advanced proficiency in applying analysis and design engineering functions

Advanced proficiency in applying non-functional software qualities such as resiliency and maintainability

Advanced proficiency in modern behavior-driven testing techniques

Deep understanding of microservices

Proficient in SQL

Expert knowledge of application development with technologies like RabbitMQ, MySQL, Redis, etc.

Strong experience with container and cloud solutions such as Docker, Kubernetes and AWS Cloud

An ability to work independently

Excellent communication skills

Based in Europe

**Benefits**

Quarterly and flash bonuses

Flexible working hours

Top-of-the-line equipment

Education allowance

Referral bonuses

Annual company holidays - we're still hoping to make it to Koh Samui in 2021!

Highly talented, dependable co-workers in a global, multicultural organization

We score 100% on The Joel Test

Our EU team is small enough for you to be impactful

Our business is established and successful, offering stability and security to our employees

#Salary and compensation
$40,000 – $90,000/year
#Location
Europe
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position at Surge has probably been filled. Please do not apply.
Closed automatically after the apply link returned a 404 error, 3 years ago.
Position: Sr. Python Developer

Responsibilities:

The Engineer will actively participate in Scrum development teams and meetings. Additionally, the Engineer will be responsible for working with a highly functional team developing and automating data ingest, optimizing system and search performance, integrating with enterprise authentication services, and establishing/improving system monitoring, while maintaining established security protocols across development, test, and production systems.

* Senior Python Developer with good experience in Python, Pandas/NumPy/SciPy, and RESTful/REST services
* Backend: Python
* Frontend: Angular or React
* Experience with Node.js would be helpful
* Expertise in at least one popular Python framework (like Django, Flask, or Tornado); Spark/Kafka/Hadoop is a plus
* Full Stack Engineer capable of designing solutions, writing code, testing code, and automating test and deployment
* Overall delivery of software components, working in collaboration with product and design teams
* Collaborating with other technology teams to ensure integrated end-to-end design and integration
* Enforcing existing process guidelines; driving new processes, guidelines, team rules, and best practices
* Ready, willing, and able to pick up new technologies and pitch in on story tasks (design, code, test, CI/CD, deploy, etc.)
* Ensures efficient execution of overall product delivery by prioritizing, planning, and tracking sprint progress (this can include the development of shippable code)

Qualifications:

* Expert in Python development
* 10+ years of Python development experience
* Bachelor's/Master's degree in Computer Science or a related quantitative field
* Knowledgeable in cloud platforms (preferably AWS: both traditional EC2 and serverless Lambda)
* Deep experience with microservices architecture, CI/CD solutions (including Docker), and DevOps principles
* Understanding of the threading limitations of Python and multi-process architecture (see the sketch after this list)
* Solid foundation and understanding of relational and NoSQL database principles
* Experience working with numerical/quantitative systems, e.g., pandas, NumPy, SciPy, and Apache Spark
* Experience in developing and using RESTful APIs
* Expertise in at least one popular Python framework (like Django, Flask, or Tornado)
* Experience in writing automated unit, integration, regression, performance, and acceptance tests
* Solid understanding of software design principles
* Proven track record of executing on the full product lifecycle (inception through deprecation) to create highly scalable and flexible RESTful APIs that enable an unlimited number of digital products
* Self-directed, with a start-up/entrepreneur mindset
* Ravenous about learning technology and problem-solving
* Strong writing and communication skills
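On the "threading limitations of Python" point above: because of the Global Interpreter Lock, CPU-bound work does not speed up with threads, while separate processes can use multiple cores. A minimal sketch of that contrast, using only the standard library (the workload size and worker counts are arbitrary):

```python
# Contrast threads vs. processes for CPU-bound work under the GIL.
# Standard library only; the workload size is arbitrary and purely illustrative.
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n: int) -> int:
    """A deliberately CPU-heavy function (no I/O, so threads gain nothing here)."""
    return sum(i * i for i in range(n))

def timed(executor_cls, jobs: list) -> float:
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(cpu_bound, jobs))
    return time.perf_counter() - start

if __name__ == "__main__":  # guard required for ProcessPoolExecutor on some platforms
    jobs = [5_000_000] * 8
    # Threads share one interpreter, so the GIL serializes the CPU-bound work.
    print(f"threads:   {timed(ThreadPoolExecutor, jobs):.2f}s")
    # Processes each get their own interpreter and GIL, so cores run in parallel.
    print(f"processes: {timed(ProcessPoolExecutor, jobs):.2f}s")
```

For I/O-bound work (network calls, disk) the picture reverses and threads or asyncio are usually the better fit.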
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Python, Senior, Developer, Digital Nomad, DevOps, Serverless, Cloud, NoSQL, Angular, Engineer, Apache, and Backend roles:
$70,000 – $120,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position at SecurityScorecard has probably been filled. Please do not apply.
Opportunity

SecurityScorecard is hiring a DevOps Engineer to bridge the gap between our global development and operations teams, someone who is motivated to help us continue automating and scaling our infrastructure. The DevOps Engineer will be responsible for setting up and managing the operation of project development and test environments, as well as the software configuration management processes for the entire application development lifecycle. Your role would be to ensure the optimal availability, latency, scalability, and performance of our product platforms. You would also be responsible for automating production operations, promptly notifying backend engineers of platform issues, and checking long-term quality metrics.

Our infrastructure is based on AWS with a mix of managed services like RDS, ElastiCache, and SQS, as well as hundreds of EC2 instances managed with Ansible and Terraform. We are actively using three AWS regions and have equipment in several data centers across the world.

Regions: North America, (GMT-7:00) Mountain Time to (GMT-4:00) Atlantic Time

Responsibilities

* Training, mentoring, and lending expertise to coworkers with regard to operational and security best practices.
* Reviewing and providing feedback on GitHub pull requests from team members and development teams; a significant percentage of our software engineers have written Terraform.
* Identifying opportunities for technical and process improvement and owning the implementation.
* Championing the concepts of immutable containers, Infrastructure as Code, stateless applications, and software observability throughout the organization.
* Systems performance tuning with a focus on high availability and scalability.
* Building tools to ease the usability and automation of processes.
* Keeping products up and operating at full capacity.
* Assisting with migration processes as well as backup and replication mechanisms.
* Working in a large-scale distributed environment with a focus on scalability, reliability, and performance.
* Ensuring proper monitoring and alerting are configured.
* Investigating incidents and performance lapses.

Come help us with projects such as...

* Extending our compute clusters to support low-latency, on-demand job execution
* Turning pets into cattle
* Cross-region replication of systems and corresponding data to support low-latency access (see the sketch after this list)
* Rolling out application performance monitoring to existing services, extending integrations where required
* Migration from self-hosted ELK to a SaaS stack
* Continuous improvement of CI/CD processes, making builds and deployments faster, safer, and more consistent
* Extending a global VPN WAN to a datacenter with IPsec+BGP
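Much of this work is multi-region AWS automation driven from Python or bash. Purely as an illustration (not SecurityScorecard's actual tooling), here is a minimal boto3 sketch that inventories running EC2 instances across a few regions; the region list is a placeholder and credentials are assumed to come from the environment:

```python
# Minimal multi-region inventory sketch: count running EC2 instances per region.
# Illustrative only; the region list is a placeholder, credentials come from the environment.
import boto3

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]  # placeholder, not the actual regions in use

def running_instances(region: str) -> list:
    """Return the IDs of running instances in one region."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    ids = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            ids.extend(instance["InstanceId"] for instance in reservation["Instances"])
    return ids

if __name__ == "__main__":
    for region in REGIONS:
        print(f"{region}: {len(running_instances(region))} running instances")
```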
Requirements

* 3+ years of DevOps and/or operations experience in a Linux-based environment
* 1+ years of production environment experience with Amazon Web Services (AWS)
* 1+ years using SQL databases (MySQL, Oracle, Postgres)
* Strong scripting abilities (bash/python)
* Strong experience with CI/CD processes (Jenkins, Ansible) and automated configuration tools (Puppet/Chef/Ansible)
* Experience with container orchestration (AWS ECS, Kubernetes, Marathon/Mesos)
* Ability to work as part of a highly collaborative team
* Understanding of monitoring tools like Datadog
* Strong written and verbal communication skills

Nice to Have

* You knew exactly what was meant by "turning pets into cattle"
* Experience working with Kubernetes on bare metal and/or the AWS Elastic Kubernetes Service
* Experience with RabbitMQ, MongoDB, or Apache Kafka
* Experience with Presto or Apache Spark
* Familiarity with computation orchestration tools such as HTCondor, Apache Airflow, or Argo
* Understanding of network concepts: OSI layers, firewalls, DNS, split-horizon DNS, VPN, routing, BGP, etc.
* A deep understanding of AWS IAM and how it interacts with S3 buckets
* Experience with SAFe
* Strong programming skills in 2+ languages

Tooling We Use

* Data definition, format, and interfaces
  * Definitions: Protobuf V3
  * Normalize from: JSON / XML / CSV
  * Normalize to: Protobuf / ORC
  * Interfaces: REST APIs and object store buckets
* Cloud services: Amazon Web Services
* Databases: PostgreSQL, PrestoDB
* Cache: Redis, Varnish
* Languages: Python / C++14 / Scala / Golang / JavaScript / Ruby / Java
* Job orchestration: HTCondor / Apache Airflow / Rundeck
* Analytics: Spark
* Storage: NFS/EFS, AWS S3, HDFS
* Computation: Docker containers / VMs / metal / EMR

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, InfoSec, Senior, Engineer, JavaScript, Amazon, Python, Scala, Ruby, SaaS, Golang, Apache, Linux, and Backend roles:
$70,000 – $120,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.