This job post for Iterative is closed and the position is probably filled. Please do not apply.
We're a company that makes open source tools for data science and machine learning. You might know us from popular tools like [DVC](https://dvc.org) (Data Version Control) and [CML](https://cml.dev) (Continuous Machine Learning), or our [YouTube channel](https://www.youtube.com/channel/UC37rp97Go-xIX3aNFVHhXfQ). Our team is small, remote-first, and passionate about creating best practices for managing the complexities of data science.

We're seeking a Developer Advocate to help us sustain and grow our active, worldwide community!

## Job description
As an open source project, our community is everything. Our code, docs and outreach activities are fueled by community contributions, and user feedback is a huge driver for our product development. We invest heavily in building relationships with data scientists, engineers and developers around the world, from brand new contributors making their first pull request to longtime users working out a special use case.

The ideal candidate will:
- Craft blog posts, release notes, and newsletters to share exciting developments on our projects, amazing contributions, and important technical Q&As.
- Turn frequently asked questions in the community into reusable resources, like tutorials and use cases.
- Be a connector and maintain an engaged presence online. Respond to timely discussions and questions on social media and design shareable, creative campaigns for regular tweets and posts.
- Enable community members of all skill levels to get involved. Welcome newcomers and encourage creative contributions. When folks make videos, blogs, or projects with our tools, help them boost the signal.
- Lead community-building events like virtual meetups and present at relevant industry conferences.
- Be analytical and data-driven in creating useful content for the community. Define, report and analyze metrics to understand our community's needs and interests.

## Skills we're looking for
- Experience in either data science or open source software.
- Experience blogging or publishing technical content online. Bonus points if it's related to open source or data science.
- Experience building and/or managing a technical community.
- Understanding of Git, Git-flow and CI/CD. You don't have to be a superuser, but we make tools built around Git and you'll need to know how to use them.
- Strong communication skills. Everyone on our team, from engineers to developer advocates, needs to be able to communicate over digital platforms kindly and clearly.
- Proficient written and spoken English is required.

### Mega bonus skills
- Knowledge of our tools and the MLOps space
- A strong existing network in data science or open source

### Perks
- A fully remote job with a competitive salary and benefits package.
- Our team culture is family-friendly. Our leadership includes several working parents, and our health insurance and unlimited PTO policies are designed with families in mind.
- This role can grow with you. There are plenty of opportunities for leadership and autonomy in our small team!
- Impact: you get to work on projects that are used every day by teams around the world! DVC and CML are used by researchers and data science teams across tech, finance, and government organizations.
- You will get a [DeeVee](https://twitter.com/DVCorg/status/1314668966082572288/photo/1).

Please mention the words **BUILD CRUISE ARMOR** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Location
Worldwide
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post for Aula is closed and the position is probably filled. Please do not apply.
#### Help our team make important decisions every day.

As our first Senior Data Analyst at Aula you will help our leaders make data-informed decisions by analyzing and visualizing data to tell compelling stories. Our core focus here at Aula is enabling both us and our educators to make data-informed decisions. You will be a key player at the center of our data-first approach.

### TL;DR
- Permanent, full-time remote role for anyone between GMT+1 and GMT-6
- Reporting to: [Kelly Burdine (Data Lead)](https://www.linkedin.com/in/kellyburdine/)
- High-impact, high-visibility role in a fast-paced EdTech startup
- Salary: £73-85k ($100k-$116k) based on location and level of experience
- Behind-the-scenes look at our data stack: Fivetran, Stitch, dbt, Snowflake, and Metabase

### Aula is the Learning Experience Platform (LXP) for higher education.

We are building a community-first product that brings together students and educators in a digital environment to interact and collaborate.

Our team's core focus is enabling both Aula and our educators to make data-informed decisions. We build things core to decision making within the business, and a recommendations feature within our product that helps educators help students succeed.

### You are

* Passionate about data. You have the curiosity and self-drive to continuously learn new techniques and tools to extract value from data.
* Detail-oriented. You have a high bar for quality and consistency.
* A storyteller. You know how to extract and communicate insights out of data to drive action.
* A problem solver. You know how to break a problem down and balance short-term goals with long-term objectives.

### You will

* Be a part of a high-performing and inclusive team that values autonomy.
* Work with your teammates to set high goals, and celebrate success when we hit them.
* Contribute to building a collaborative, productive and friendly remote workplace.
* Double the percentage of educators who say they love Aula by analyzing and identifying trends in usage and survey data and providing those insights to our PMs to create the most impactful product roadmap.
* Identify the core action(s) that help educators realize the value of Aula and improve retention.
* Help our educators and partner institutions understand Aula's impact on student outcomes by analyzing product usage, university data, and survey data and creating top-notch reporting.
* Analyze product usage to understand which educator actions drive student engagement and provide those insights through a new insights and recommendations feature in Aula.
* Build, maintain, and communicate company-level metrics.
* Design and build analytic data models that support answering questions from our product team and company leaders.
* Build new and exciting revenue streams with data.

**As an organisation we are remote-first, asynchronous by default.** Our rituals allow us to have a team based all around the world and deliver reliable and accurate data solutions. We want this person to have several hours of overlap with the rest of the data team, so we're hiring from anywhere between Greenwich Mean Time (GMT) and GMT-6 (London to Chicago). We're not hiring from places that require a graveyard shift to make the overlap happen.

## What you need to do the job

The most important thing about you is that you are curious and care deeply about how data empowers better decision making. You are transparent, considerate and ready to work hard to further our mission.

### Must-have experience:

* Strong SQL skills
* Experience using analytic techniques such as segmentation, hypothesis testing, regression, and clustering
* Experience building reports and dashboards with a modern data visualization tool (e.g. Metabase, Redash, Looker, Mode, Sisense)

### Nice-to-have experience:

* Working with a modern data warehouse (e.g. Snowflake, BigQuery, Redshift)
* Working with Git and Python
* Modeling data with dbt
* Working in SaaS, analyzing product usage data, and designing and analyzing A/B experiments

You approach everything you do proactively and are always looking for ways to improve and innovate. You understand that this is a small team in a startup that is scaling, and are excited to contribute to the Aula story.

## About the team
Our data team is made up of a team lead, an analytics engineer, a data analyst (you!), and a user researcher. We work closely with our product, commercial, and operations teams to provide the reporting and insights for internal decision makers. We are also working to provide our students and educators data and insights within the Aula platform.

**Our virtues are what make Aula unique as an organisation.**

Our commitment to diversity and inclusion should not be mistaken for building an organisation where 8 billion people would thrive. We lean into what makes Aula unique: we're building an organisation where high-performing people are silly ambitious about improving education, at scale.

**We judge our virtues by what we do, not what we say.**

Our virtues are:
* Silly Ambitious
* Uncomfortably Focused
* Transparent by Default

### A fair chance

Every role in the Aula team is open to applications from all sections of society. We believe in the superpowers and potential of everyone, regardless of race, religion or belief, ethnic origin, different physical ability, family structure, socio-economics, age, nationality or citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other difference that makes you, well, you.

More than just encouraging your application, we're committed to conscious inclusion that (we hope) cultivates an ethos of belonging, connection and shared purpose. It's this philosophy that drives us towards our mission, and we open our doors to those who share these motivations.

Please mention the words **DISCOVER ACTION VELVET** when applying to show you read the job post completely.

#Salary and compensation
$100,000–$120,000/year

#Benefits
- Async

#Location
GMT -6 to GMT +1
This job post for Breakthrough is closed and the position is probably filled. Please do not apply.
Closed automatically after the apply link returned a 404 error, 3 years ago.
Breakthrough is seeking a Data Science Engineer. This position will be part of the Technology Solutions team based in Green Bay, WI. We are seeking an individual who is passionate about data science and machine learning exploration and solution building.

This role will work closely with our engineering team to identify solutions, rapidly prototype, and build enterprise solutions that integrate with our product platform. The ideal candidate will be intricately involved in running analytical experiments in a methodical manner and will regularly evaluate alternate models via theoretical approaches. This is the perfect opportunity to become part of an innovative and energetic team that develops analysis tools which will influence both our products and our clients.

Primary Responsibilities:
* Build machine learning models to make accurate predictions about the future based on past data. Pursue ideas and experiment to find interesting patterns and trends in the data that add value and create new business insights.
* Design, build and test hypotheses with algorithms and machine learning methods.
* Design requirement-driven data models to support enterprise product development and leverage a proprietary data lake structure.
* Acquire, analyze, combine, synthesize and store data from a wide range of internal and external sources.
* Bring a solid understanding of supervised and unsupervised machine learning methods.
* Bring a strong understanding of statistics and the ability to critically evaluate statistical models.
* Communicate complex data analysis results clearly and understandably to business leaders and peers.

What qualities are we looking for?
* An individual who is passionate about data science with an inquisitive nature and a drive to seek answers through data.
* An individual who strives for excellence with a laser focus on team communication and facilitation of ideas.
* An individual with a proactive attitude who works well in a fast-paced team environment.
* An individual who communicates and collaborates well with IT and business teams.
* A critical thinker with good problem-solving skills and an ability to multi-task.
* Strong communication skills to express oneself clearly both verbally and in writing. Persistent, active listening skills.
* Demonstrated leadership skills with a willingness to readily and voluntarily take ownership of project issues.
* An individual who can work at both the strategic and tactical levels, holding others accountable while building team rapport and engagement.
* Ability to develop and maintain positive working relationships throughout the organization.

Qualifications and Skills:
* Degree in Computer Science or an equivalent quantitative field and 5+ years of experience in a similar Data Science role.
* Proven experience with R, Python, SQL, TensorFlow, Jupyter and ML methods (e.g. Random Forests, Neural Nets).
* Deep understanding of distributed data management systems and related applications.
* Mastery of data lake design and implementation considerations such as columnar storage formats and partitioning.
* Experience working with and extracting value from large, disconnected and/or unstructured datasets.
* Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency and workload management.
* Experience building and optimizing 'big data' data pipelines, architectures and data sets.
* Experience with modern CI/CD pipeline technologies involving Git repositories, static code analysis, test-driven development, etc.
* Experience automating analysis optimizations based on performance metrics.
* Experience with GCP technologies or related cloud technology experience.
* Experience with analytics tools such as Apache Beam, Spark, JupyterLab, etc.
* Experience with modern infrastructure-as-code technologies like Docker, Kubernetes, Terraform, and Airflow.
* Excellent teamwork, coordination, influencing and communication skills.

ABOUT BREAKTHROUGH: Breakthrough is the expert in managing the energy and information that fuels the movement of goods globally. We use our patented technology and industry knowledge to create transportation energy and supply chain management strategies for the world's leading shippers. We remove the waste associated with distorted supply chain practices and reveal data-driven insights to give our clients a competitive advantage in their supply chains.

WHY WORK AT BREAKTHROUGH? See our SMART, PASSIONATE, and EDGY team in action: https://www.breakthroughfuel.com/careers/

#Salary and compensation
No salary data published by the company, so we estimated a salary based on similar jobs related to Data Science, Engineer, Cloud, Git and Apache:
$80,000–$120,000/year

#Benefits
- 401(k)
- Distributed team
- Async
- Vision insurance
- Dental insurance
- Medical insurance
- Unlimited vacation
- Paid time off
- 4 day workweek
- 401k matching
- Company retreats
- Coworking budget
- Learning budget
- Free gym membership
- Mental wellness budget
- Home office budget
- Pay in crypto
- Pseudonymous
- Profit sharing
- Equity compensation
- No whiteboard interview
- No monitoring system
- No politics at work
- We hire old (and young)
We're a company that makes open source tools for data science and machine learning. You might know us from popular tools like [DVC](https://dvc.org) (Data Version Control) and [CML](https://cml.dev) (Continuous Machine Learning), or our [YouTube channel](https://www.youtube.com/channel/UC37rp97Go-xIX3aNFVHhXfQ). Our team is small, remote-first, and passionate about creating best practices for managing the complexities of data science.

We're seeking a **Technical Community Manager** to help us sustain and grow our active, worldwide community!

## Job description
As an open source project, our community is everything. Our code, docs and outreach activities are fueled by community contributions, and user feedback is a huge driver for our product development. We invest heavily in building relationships with data scientists, engineers and developers around the world, from brand new contributors making their first pull request to longtime users working out a special use case.

The ideal candidate will:

* Enable community members of all skill levels to get involved. Welcome newcomers and encourage creative contributions. When folks make videos, blogs, or projects with our tools, help them boost the signal.
* Be a connector and maintain an engaged presence online. Respond to timely discussions and questions on social media and design shareable, creative campaigns for regular tweets and posts.
* Write blog posts, release notes, and newsletters to share exciting developments on our projects, amazing contributions, and important technical Q&As.
* Turn frequently asked questions in the community into reusable resources, like tutorials and use cases.
* Lead community-building events like virtual meetups and our ambassador program.
* Be analytical and data-driven in building and nurturing the community. Define, report and analyze metrics to understand our community's needs and growth.

## Skills we're looking for
* Experience in either **data science or open source software**.
* Experience building and/or managing a **technical community**.
* Understanding of Git, Git-flow and CI/CD. You don't have to be a superuser, but we make tools built around Git and you'll need to know how to use them.
* Experience blogging or publishing technical content online. Bonus points if it's related to open source or data science.
* Strong communication skills. Everyone on our team, from engineers to developer advocates, needs to be able to communicate over digital platforms kindly and clearly.
* Proficient written and spoken English is required.

**Mega bonus skills**
* Knowledge of our tools and the MLOps space
* A strong existing network in data science or open source

## Perks
* A fully remote job with a competitive salary and benefits package.
* Our team culture is family-friendly. Our leadership includes several working parents, and our health insurance and unlimited PTO policies are designed with families in mind.
* This role can grow with you. There are plenty of opportunities for leadership and autonomy in our small team!
* Impact: you get to work on projects that are used every day by teams around the world! DVC and CML are used by researchers and data science teams across tech, finance, and government organizations.
* You will get a DeeVee.

Please mention the words **GRASS EXCESS UNVEIL** when applying to show you read the job post completely.

#Location
Worldwide
# How do you apply?

Fill out our [online application](https://forms.gle/v4scKdfgTfYbXtGTA).
Please reference that you found the job on Remote OK; this helps us get more companies to post here, thanks!
This job post for Drops is closed and the position is probably filled. Please do not apply.
If you have a soft spot for bootstrapped, profitable companies with a meaningful product, and you want to use your data analyst / growth skills for good, you'll like this.

About us:

Drops' goal is to turn language learning into a delightful game while ensuring effective learning. Our app has been in the App Store for 3 years, teaches 31 languages, has been featured by both the App Store and Play Store multiple times (Editors' Choice on the Play Store), and the company is still run by the founders. We are a small, super-capable remote team mainly spread across Europe. We work synchronously, so time zones matter for us. We communicate via Slack, Git and Trello and have a release twice a week. We want to be the no. 1 app for vocabulary learning, and we are getting there quickly with our current user base of 7 million, more than 800,000 monthly active users and an average store rating of 4.7.

You can find us here: [http://drops.app.link/](http://drops.app.link/)

About you:

You'll be responsible for improving the quantity, quality and reliability of the multivariate tests we run.

You're a no-nonsense person who is comfortable taking ownership of all aspects of our analytics & growth funnel, who has experience working at a product company, and who can bring us insights, initiatives and execution that will help us grow.

We want everyone to see the big picture: this means you have already pushed your boundaries outside of the strict data realm, and are knowledgeable about mobile and web growth frameworks, ASO, SEO, and best practices regarding retention and monetization.

We're building a small but super-capable team. You're naturally more interested in the fate of the product and driven to grow professionally than in managing people.

We are looking for a missionary rather than a mercenary.

We value clear and honest communication and transparency; it's the linchpin of our culture and current success and freedom. You will be involved in both high- and low-level decision making and will be available during European working hours (9AM - 6PM GMT).

We offer:

* An awesomely compact 13-person team
* Educational allowance
* Fitness allowance
* All the perks of remote working
* 30 days of holiday per year (including Christmas and other holidays)
* Quarterly team gathering somewhere in the world
* Stock options from a high-growth, profitable company

# Responsibilities

You will:

* Work cross-functionally with our engineers, designer and marketing teams on opportunities to initiate, manage and analyze the A/B tests we're running across multiple platforms.
* Make sense of all the data that comes in (mobile, web, iTunes, Play Store, our own user database).
* Gather insights from the existing data we have, and initiate projects to help improve our KPIs.
* Prefer a minimal set of simple tools to complex ones. (This is important for us.)
* Communicate effectively and often to ensure that the team is aligned.
* Help the engineers structure the events on existing and new platforms and define our KPIs.

# Requirements

You have:

* At least 2 years of experience working in product analytics, managing multivariate tests and their results.
* At least 3 years of experience using various analytics / BI tools (from Amplitude to SQL queries).
* Experience driving product growth in the consumer mobile app space.
* Some experience in qualitative testing methodologies. You're not afraid of engaging with end-users if needed.
* Knowledge of fundamental mobile and web growth mental models.
* Project management experience (everyone manages projects at Drops).
* Strong verbal and written communication skills and the ability to work well cross-functionally.
* (Preferably) experience in all 4 pillars of the growth funnel: acquisition, activation, retention, monetization.

Please mention the words **OUTDOOR CURRENT FACULTY** when applying to show you read the job post completely.

#Salary and compensation
No salary data published by the company, so we estimated a salary based on similar jobs related to Data Science, Marketing, Executive, Analyst, Git and Mobile:
$70,000–$115,000/year
This job post for Doximity is closed and the position is probably filled. Please do not apply.
Why work at Doximity?

Doximity is the leading social network for healthcare professionals, with over 70% of U.S. doctors as members. We have strong revenues, real market traction, and we're putting a dent in the inefficiencies of our $2.5 trillion U.S. healthcare system. After the iPhone, Doximity is the fastest adopted product by doctors of all time. Our founder, Jeff Tangney, is the founder & former President and COO of Epocrates (IPO in 2010), and Nate Gross is the founder of digital health accelerator RockHealth. Our investors include top venture capital firms who've invested in Box, Salesforce, Skype, SpaceX, Tesla Motors, Twitter, Tumblr, Mulesoft, and Yammer. Our beautiful offices are located in SoMa, San Francisco.

Skills & Requirements

- 3+ years of industry experience; M.S. in Computer Science or another relevant technical field preferred.
- 3+ years of experience collaborating with data science and data engineering teams to build and productionize machine learning pipelines.
- Fluent in SQL and Python; experience using Spark (pyspark) and working with both relational and non-relational databases.
- Demonstrated industry success in building and deploying machine learning pipelines, as well as feature engineering from semi-structured data.
- Solid understanding of the foundational concepts of machine learning and artificial intelligence.
- A desire to grow as an engineer through collaboration with a diverse team, code reviews, and learning new languages/technologies.
- 2+ years of experience using version control, especially Git.
- Familiarity with Linux, AWS, Redshift.
- Deep learning experience preferred.
- Work experience with REST APIs, deploying microservices, and Docker is a plus.

What you can expect

- Employ appropriate methods to develop performant machine learning models at scale, owning them from inception to business impact.
- Plan, engineer, and deploy both batch-processed and real-time data science solutions to increase user engagement with Doximity's products.
- Collaborate cross-functionally with data engineers and software engineers to architect and implement infrastructure in support of Doximity's data science platform.
- Improve the accuracy, runtime, scalability and reliability of machine intelligence systems.
- Think creatively and outside of the box. The ability to formulate, implement, and test your ideas quickly is crucial.

Technical Stack

- We historically favor Python and MySQL (SQLAlchemy), but leverage other tools when appropriate for the job at hand.
- Machine learning (linear/logistic regression, ensemble models, boosted models, deep learning models, clustering, NLP, text categorization, user modeling, collaborative filtering, topic modeling, etc.) via industry-standard packages (sklearn, Keras, NLTK, Spark ML/MLlib, GraphX/GraphFrames, NetworkX, gensim).
- A dedicated cluster is maintained to run Apache Spark for computationally intensive tasks.
- Storage solutions: Percona, Redshift, S3, HDFS, Hive, Neo4j, and Elasticsearch.
- Computational resources: EC2, Spark.
- Workflow management: Airflow.

Fun facts about the Data Science team

- We have one of the richest healthcare datasets in the world.
- We build code that addresses user needs, solves business problems, and streamlines internal processes.
- The members of our team bring a diverse set of technical and cultural backgrounds.
- Business decisions at Doximity are driven by our data, analyses, and insights.
- Hundreds of thousands of healthcare professionals will utilize the products you build.
- A couple of times a year we run a co-op where you can pick a few people you'd like to work with and drive a specific company goal.
- We like to have fun: company outings, team lunches, and happy hours!

Please mention the words **WAVE SPOT WORTH** when applying to show you read the job post completely.

#Salary and compensation
No salary data published by the company, so we estimated a salary based on similar jobs related to Git, Python, Machine Learning, Data Science, Engineer, Linux, Docker and Apache:
$80,000–$122,500/year