Chan Zuckerberg Biohub - San Francisco is hiring a
Remote AI ML HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub Network has an immediate opening for an AI/ML High Performance Computing (HPC) Principal Engineer. The CZ Biohub Network is composed of several new institutes that the Chan Zuckerberg Initiative created to do great science that cannot be done in conventional environments. The CZ Biohub Network brings together researchers from across disciplines to pursue audacious, important scientific challenges. The Network consists of four institutes throughout the country: San Francisco, Silicon Valley, Chicago, and New York City. Each institute closely collaborates with the major universities in its local area. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports several hundred of the brightest, boldest engineers, data scientists, and biomedical researchers in the country, with the mission of understanding the mysteries of the cell and how cells interact within systems.

The Biohub is expanding its global scientific leadership, particularly in the area of AI/ML, with the acquisition of the largest GPU cluster dedicated to AI for biology. The AI/ML HPC Principal Engineer will be tasked with helping to realize the full potential of this capability, in addition to providing advanced computing capabilities and consulting support to science and technical programs.
This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle: data ingest, analysis, management and storage, computation, authentication, tool development, and many other computing needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure, supplemented with cloud computing, to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist researchers with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc. for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support machine learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently, with minimal intervention

What You'll Bring

Essential:

* Bachelor's degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production compute, interconnect, and storage hardware, software systems, and storage subsystems
* Configuring and administering parallel, network-attached storage (Lustre, GPFS on ESS, NFS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, VAST, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.) and implementing fairshare, node sharing, backfill, etc. for compute and GPUs
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, cgroups
* Scripting languages (including Bash, Python, or Perl)
* OpenACC, nvhpc, and an understanding of CUDA driver compatibility issues
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High-performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software (Modules, Spack)
* Familiarity with source control tools (Git or SVN)
* Experience supporting the use of popular ML frameworks such as PyTorch and TensorFlow
* Familiarity with cybersecurity tools, methodologies, and best practices for protecting systems used for science
* Experience with movement, storage, backup, and archiving of large-scale data

Nice to have:

* An advanced degree is strongly desired

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date.
Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Consulting, Education, Cloud, Node, Engineer and Linux:

$57,500 — $85,000/year
#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
San Francisco, California, United States
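As a sketch of the job-scheduler work the posting above describes (SLURM with fairshare, backfill, and node sharing for GPUs), a slurm.conf fragment might look like the following. All node names, counts, and weights here are invented for illustration, not details from the posting:

```ini
# slurm.conf fragment (sketch): backfill scheduling, multifactor priority
# with a fairshare component, and GPU-aware node sharing.

# Backfill lets short jobs run in scheduling gaps without delaying
# the highest-priority pending job.
SchedulerType=sched/backfill

# Multifactor priority, weighted heavily toward fairshare usage history.
PriorityType=priority/multifactor
PriorityWeightFairshare=100000
PriorityDecayHalfLife=7-0

# cons_tres allocates cores/memory/GPUs rather than whole nodes,
# so several jobs can share one GPU box (node sharing).
SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory

# Declare GPUs as a trackable resource (GRES); hypothetical node layout.
GresTypes=gpu
NodeName=gpu[001-016] CPUs=128 RealMemory=1024000 Gres=gpu:8
PartitionName=gpu Nodes=gpu[001-016] Default=YES MaxTime=7-00:00:00 State=UP
```

Real deployments would tune the priority weights and TRES accounting per site; this fragment only shows which knobs the fairshare/backfill/node-sharing bullet refers to.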
Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
Oscatel is hiring a remote back-end engineer - working with Go and Node.js - to design and implement application layer solutions for a range of projects that underpin mobile telco carriers.

We're building modular, scalable solutions around operations and service management for some of Europe's largest and most innovative carriers, all of which we will help you get to grips with.

The domain entails data-intensive services where security, data integrity and uptime are key. This presents lots of interesting design and coding challenges as we build and integrate our technology. We're utilising established IP-based protocols and frameworks, working with the standard Go libraries and Node.js.

You can anticipate a mix of well-defined, mainly greenfield projects, along with substantial exploratory work as we validate concepts and build new solutions, maturing these into customisable long-term products.

We offer a culture where you may work under your own initiative as part of a collaborative effort towards common goals. It's an opportunity to be a formative team member, and to grow and improve together.

**A flavour of upcoming projects**

* High throughput transaction systems
* Data management methods, analytics and alerting tools
* QoS and fraud monitoring solutions
* SS7 signalling services and firewall
* Product modules - analytics & metrics, financial reporting & billing, message routing
* Helping promote sustainable development culture, methods and automation

**We're looking for**

* Someone with solid coding and solution design skills, accrued in a modern application back-end context
* Production coding experience with Go, or good familiarity with Go in addition to another statically typed or back-end language that you've applied in a Linux environment
* Familiarity with Node.js, TypeScript or JS
* A service-oriented-architecture approach, with strong API design and SQL skills
* An ability to get to grips with complex requirements, to uphold security of sensitive data and to conform to best practices
* A shared belief in writing code that's efficient, well-tested, documented and maintainable

**Current ecosystem - we'll welcome your influence**

Go | Node.js, TypeScript, React | gRPC | GraphQL | Elasticsearch | RabbitMQ | Kubernetes | Docker | AWS | Linux | Terraform | CircleCI | Atlassian stack | GitHub | Slack

**Salary and benefits**

* £55,000 - £70,000+ - we're keeping an open mind
* 30 days holiday (plus public holidays)
* One week's extra pay each December
* Pension contribution matched at 5%
* Flexible working - tell us what you need - e.g. four day week
* Personal development plan that you can shape, with budget for related training/certifications
* Workstation and remote working equipment
* Every three years - option to take six weeks' paid sabbatical

**About us**

Oscatel provides software solutions that underpin the operations of mobile carriers. Founded in 2009, we're a fast-growing, privately-owned business that's building a friendly and collaborative, all-new development team. You'll become our third back-end colleague, joining a team designed around remote working. We're semi-Agile, with daily standups and a flat team structure, where you can work on features through to fruition. We're looking for someone who wants to write great software that utilises our domain knowledge and creates value for our customers. https://www.oscatel.co.uk

**Location: fully remote within UK**

**Please note, we are only considering candidates who have an established right to work in the UK.**

Please mention the words **SENTENCE PEOPLE AGAIN** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xMDA=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Salary and compensation
$70,000 — $100,000/year

#Location
UK
# How do you apply?

Even if your CV isn't ready, please talk with Vittoria at techfolk to find out more:

0117 318 2447 | hello [at] techfolk.co.uk | [at] Vix_Rubino

RECRUITERS: Oscatel has selected techfolk as its recruitment partner for this position and cold calling or speculative applications are not welcomed.
This job post is closed and the position is probably filled. Please do not apply. Work for CircleBlack and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after apply link errored with code 404, 3 years ago
Who are we?

CircleBlack, Inc. provides financial advisors with technology that aggregates data, integrates other financial applications seamlessly, manages data from multiple custodians, and delivers actionable intelligence about client portfolios, helping advisors better manage clients' wealth while growing and deepening advisor-client relationships. CircleBlack provides a leading platform built for the digital age, with a web-based and mobile application that can be taken anywhere and accessed anytime. CircleBlack's solution leverages proprietary technology that helps sustain the Company's unique competitive advantages. CircleBlack believes in making wealth management better, for both the investor and the advisor. For more information about CircleBlack, visit https://www.circleblack.com

Position Summary:

We are looking for a passionate, forward-thinking Full-Stack Senior Software Engineer to design, develop and maintain our software solutions. You will be working on building quality, performant software that enables financial advisors to deliver real-time data to their clients while adapting to industry trends. Ideal candidates should be passionate about solving complex problems while being able to design, develop and support industry-leading solutions using Node.js in a fast-paced environment.

Responsibilities:

* Design and develop Node.js APIs, integrations, analytics engines, and infrastructure tools.
* Implement modern React user interfaces.
* Lead the migration from one core application to another, while proposing and implementing modern performance optimizations and scaling strategies, such as a React user interface.
* Drive software change while ensuring software deliverables comply with quality standards.
* Collaborate effectively with stakeholders, designers and testers, advising on impact and performance, to deliver the highest quality of software.
* Perform code reviews, suggesting improvements and ensuring adherence to best practices.
* Participate in an Agile development process.
* Develop for a full stack of technologies including Node.js, Nginx, React, Angular 1, MySQL, Elasticsearch, Kibana, PHP, Perl, Python and/or Ruby, and Redis on AWS Linux servers.
* Determine the root cause of complex software issues and develop practical solutions.
* Serve as technical team lead and act as a mentor, enabling skill development through coaching and training opportunities.

Competencies:

* Ability to approach problems in a holistic manner, both tactical and strategic
* Continuously aware of coaching and mentoring opportunities with junior software engineers
* Creative, resourceful and outside-the-box thinking approach
* Initiator; natural "fixer" mentality
* Problem-solver and analytical

Education/Qualification:

* 7+ years of application development experience; 4+ years of experience using Node.js. This is a must!
* 2+ years of experience with MySQL database development
* Experience building maintainable and testable code bases, including API and database design in an agile environment, and driving software change
* Hands-on experience integrating third-party SaaS providers using a variety of technologies, including at least some of the following: REST, SOAP, SAML, OAuth, OpenID, JWT, Salesforce
* Experience working in a cloud environment, specifically AWS
* Experience with non-relational databases such as Mongo, Redis, Elasticsearch
* Ability to work independently, and remotely for the time being
* BSc degree in Computer Science, Engineering or a relevant field

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Senior, Engineer, Full Stack, Developer, Digital Nomad, React, Cloud, Python, Angular, API, Mobile, Junior, SaaS and Linux:

$70,000 — $120,000/year
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Feed Media and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after apply link errored with code 404, 3 years ago
We're looking for a talented software developer to work on the server side of our music delivery service: music ingestion and delivery, analytics collection and reporting, and web services used by our clients and curation team. Your goal will be to help us power sound everywhere.

You will need to form a full understanding of our data model, how we map our music providers' schemas to it, and how we expose it to our clients. You will be working with our primary data stores: MySQL, Elasticsearch and Google Bigtable. You will work on all our backend services, written primarily in Node.js, along with PHP and Bash scripts.

You will work hand-in-hand with our current engineering team, music curators, customer support, and product team to define and develop whatever is needed to advance our business. You will work with and develop our tools and services that:

* Ingest files and metadata from our music providers
* Analyze and extract metadata from our music collection
* Deliver music to applications using our client SDKs
* Track music playback and generate reports for our music providers and licensors
* Generate analytics and reporting from playback and client data
* Power our customer and client portals

You are, at heart, a problem solver, and eager to collaborate with others to deploy working solutions to advance our business. You are eager to understand how the tasks you are working on fit into the bigger picture, and you proactively engage with others to clarify and refine what you are working on.

We manage our infrastructure with Chef and Terraform, and use Jenkins and Git for deployment. We strive for reliability and simplicity, and look to outside SaaS providers when the price is right. You will take part in managing and supporting our staging and production environments.

At Feed.fm, we believe the best candidates are excellent communicators, learn quickly, are compassionate, collaborate well with others, and have a strong desire to see their work in action. We are flexible with working hours and maintain a healthy balance between work and personal lives.

Requirements

* Outstanding communication skills and an eagerness to collaborate with others
* Experience building and maintaining production web services
* Strong Node.js development experience
* Strong SQL experience, particularly MySQL
* Familiarity with server-side languages such as Go, Java, PHP
* Strong operational experience with Linux and cloud computing providers (AWS, DigitalOcean, Google Cloud, or others)
* Experience with cloud provisioning and infrastructure management tools such as Terraform and Chef
* Experience with test-driven development
* Strong desire to ship, receive feedback, and improve
* Readiness to take responsibility for production systems

Bonus:

* Experience with storing and processing event streams
* Experience with generating reporting and analytics
* Terraform, GraphQL, Kinesis, Elasticsearch experience
* Past experience with server- or client-side audio processing
* Past experience implementing or working with music-related applications
* Contributions to open-source projects
* Familiarity with frontend web development: JavaScript (and popular frameworks like React, jQuery, and others), HTML, CSS

Benefits:

* Competitive salary
* Equity
* Comprehensive health, dental, vision and disability insurance, along with a 401k matching plan
* Working with a talented team and having a huge impact!

Why Feed.fm?

We're providing music for companies you know and love: Fitbit, Nautilus, Tonal, Mirror, American Eagle Outfitters, Bose, Life Fitness, and others.

* We're building a real company that generates value and pays artists
* You will have a hand in all aspects of a growing platform
* Experienced, down-to-earth coworkers and investors

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Senior, Engineer, Developer, Digital Nomad, JavaScript, Music, Elasticsearch, Cloud, PHP, Git, SaaS, Linux and Backend:

$65,000 — $120,000/year
This job post is closed and the position is probably filled. Please do not apply. Work for Liquid Analytics and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after apply link errored with code 404, 3 years ago
At Liquid Analytics we use machine intelligence to enhance the user experience.

As a Sr. Java Software Developer you will be managing, extending and enhancing our premier Liquid Platform in AWS.

You are not just your average enterprise Java developer building applications on J2EE. You are building next-generation cloud applications in AWS today.

Your most important design driver is performance. You know how to model and build low-latency, asynchronous cloud applications. You know how to set up and manage a highly scalable Docker environment. You have solid Linux and networking skills to troubleshoot concurrency and latency issues with your asynchronous microservices.

You have proven experience with various database models. You can structure, manage and query complex SQL databases. We use PostgreSQL RDS in AWS.

You have proven skills working with columnar databases like Amazon Dynamo or Cassandra.

You have worked with various JSON document databases such as Google Firebase or MongoDB.

You know how to build and manage HTTPS REST APIs. We use the OpenAPIs.org specification, formerly known as Swagger.

You have working experience using WebSockets to build highly responsive applications. One of the differentiators of the Liquid Platform is our highly scalable, low-latency sync engine that allows our clients to work online or offline with the lowest impact to battery life.

You have a minimum of 5 years of Java development experience, most of them building cloud applications in AWS.

You have an undergraduate degree in Computer Science. You have a solid math background and are capable of adapting the right algorithm and data structure to most problems.
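As a minimal, hedged sketch of that kind of algorithm fitting, here is a closed-form least-squares linear regression over a monthly sales history. The data and names are invented for the example and are not Liquid Analytics' actual goal distribution code:

```python
# Sketch: fit y = slope * x + intercept by ordinary least squares,
# then project a sales goal for the next month. Pure stdlib; the
# data below is hypothetical, for illustration only.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Twelve months of (hypothetical) sales history, perfectly linear for clarity.
months = list(range(1, 13))
sales = [100 + 10 * m for m in months]

slope, intercept = fit_line(months, sales)
next_month_goal = slope * 13 + intercept  # projected goal for month 13
```

A production version would sit behind an in-memory structure keyed by region or rep, but the fitting step itself is this small.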
For example, you would know how to use a linear regression algorithm to analyze sales history information to create a highly optimized goal distribution process that runs in its own optimized in-memory data structure.

Java is not your only language; you are equally comfortable in JavaScript and Python.

If you are a seasoned Java developer building AWS cloud applications today, then you already know all the required toolsets you need to succeed. You already stay up to date with the latest in Java and AWS.
We use Python for scripts.
We use JavaScript to write most of our lightweight microservices in Node.js containers.
We use the WebStorm IDE for JavaScript development.
We use the Eclipse IDE for Java development.
We use GitHub for our code management.
We use Jenkins for automation.

Join the Liquid Analytics team and work on the next-generation 'Experience Platform', Liquid Platform.

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Java, Cloud, Engineer, Developer, Digital Nomad, JavaScript, Amazon, Firebase, Math, Python, Stats, Sales and Linux:

$70,000 — $120,000/year