About AllTrails

AllTrails is the most trusted and used outdoors platform in the world. We help people explore the outdoors with hand-curated trail maps along with photos, reviews, and user recordings crowdsourced from our community of millions of registered hikers, mountain bikers, and trail runners in 150 countries. AllTrails is frequently ranked as a top-5 Health and Fitness app and has been downloaded by over 75 million people worldwide.

Every day, we solve incredibly hard problems so that we can get more people outside having healthy, authentic experiences and a deeper appreciation of the outdoors. Join us!

This is a U.S.-based remote position. San Francisco Bay Area employees are highly encouraged to come into the office one day a week.

What You'll Be Doing:
* Work cross-functionally to ensure data scientists have access to clean, reliable, and secure data, the backbone for new algorithmic product features
* Build, deploy, and orchestrate large-scale batch and stream data pipelines to transform and move data to/from our data warehouse and other systems
* Deliver scalable, testable, maintainable, and high-quality code
* Investigate, test for, monitor, and alert on inconsistencies in our data, data systems, or processing costs
* Create tools to improve data and model discoverability and documentation
* Ensure data collection and storage adheres to GDPR and other privacy and legal compliance requirements
* Uphold best data-quality standards and practices, promoting such knowledge throughout the organization
* Deploy and build systems that enable machine learning and artificial intelligence product solutions
* Mentor others on industry best practices

Requirements:
* Minimum of 6 years of experience working in data engineering
* Expertise in using both SQL and Python for data cleansing, transformation, modeling, pipelining, etc.
* Proficiency in working with other stakeholders and converting requirements into detailed technical specifications; owning and leading projects from inception to completion
* Proficiency in working with high-volume datasets in SQL-based warehouses such as BigQuery
* Proficiency with parallelized Python-based data processing frameworks such as Google Dataflow (Apache Beam), Apache Spark, etc.
* Experience using ELT tools like Dataform or dbt
* Professional experience maintaining data systems in GCP and AWS
* Deep understanding of data modeling, access, storage, caching, replication, and optimization techniques
* Experience orchestrating data pipelines and Kubernetes-based jobs with Apache Airflow
* Understanding of the software development lifecycle and CI/CD
* Experience with monitoring and metrics-gathering tools (e.g. Datadog, New Relic, CloudWatch)
* Willingness to participate in an on-call support rotation (shifts are a week long; currently each engineer's turn comes up about once a month)
* Proficiency with Git and working collaboratively in a shared codebase
* Excellent documentation skills
* Self-motivation and a deep sense of pride in your work
* Passion for the outdoors
* Comfort with ambiguity, and an instinct for moving quickly
* Humility, empathy, and open-mindedness - no egos

Bonus Points:
* Experience working in a multi-cloud environment
* Experience with GIS, H3, or other mapping technologies
* Experience with Amplitude
* Experience with infrastructure-as-code tools such as Terraform
* Experience with machine learning frameworks and platforms such as Vertex AI, SageMaker, MLflow, or related frameworks

What We Offer:
* A competitive and equitable compensation plan. This is a full-time, salaried position that includes equity
* Physical & mental well-being, including health, dental, and vision benefits
* Trail Days: no meetings on the first Friday of each month so you can test the app and explore new trails!
* Unlimited PTO
* Flexible parental leave
* Remote employee equipment stipend to create a great remote work environment
* Annual continuing education stipend
* Discounts on subscriptions and merchandise for you and your friends & family
* An authentic investment in you as a human being and your career as a professional

$170,000 - $210,000 a year

The successful candidate's starting salary will be determined based on various factors such as skills, experience, training, and credentials, as well as other business purposes or needs. It is not typical for a candidate to be hired at or near the top of the range for their role, and compensation decisions are dependent on the factors and circumstances of each case.

Nature celebrates you just the way you are, and so do we! At AllTrails we're passionate about nurturing an inclusive workplace that values diversity. It's no secret that companies that are diverse in background, age, gender identity, race, sexual orientation, physical or mental ability, ethnicity, and perspective are proven to be more successful. We're focused on creating an environment where everyone can do their best work and thrive.

AllTrails participates in the E-Verify program for all remote locations.

By submitting my application, I acknowledge and agree to AllTrails' Job Applicant Privacy Notice.
#Location
San Francisco
The Mortgage Engineering team is seeking a highly skilled and experienced Senior Backend Engineer with a strong focus on microservices architecture to join our team. The ideal candidate will be proficient in Java and possess in-depth knowledge of Kafka, SQS, Redis, Postgres, Grafana, and Kubernetes. You are an expert in working with and scaling event-driven systems, webhooks, and RESTful APIs, and in solving challenges with concurrency and distributed systems. As a Senior Backend Engineer at Ocrolus, you will be responsible for designing, developing, and maintaining highly scalable and reliable backend systems. You will work closely with product managers, designers, and other engineers to ensure our services meet the highest standards of performance and reliability, specifically tailored to the needs of the mortgage industry.

Key Responsibilities:

* Design, develop, and maintain backend services and microservices architecture using Java.
* Implement event-driven systems utilizing Kafka and AWS SQS for real-time data processing and messaging.
* Optimize and manage in-memory data stores with Redis for high-speed caching and data retrieval.
* Develop and maintain robust database solutions with Postgres, ensuring data integrity and performance with pganalyze.
* Deploy, monitor, and manage containerized applications using Kubernetes and Terraform, ensuring their scalability and resilience, and manage our cloud infrastructure.
* Collaborate closely with product managers and designers to understand requirements and deliver technical solutions that meet business needs.
* Develop and maintain RESTful APIs and gRPC services to support seamless integration with frontend applications and third-party services.
* Ensure secure and efficient authentication and authorization processes using OAuth.
* Manage codebases in a monorepo environment using Bazel for build automation.
* Troubleshoot and resolve client support issues in a timely manner, ensuring minimal disruption to service.
* Continuously explore and implement new technologies and frameworks to improve system performance and efficiency.
* Write and maintain technical documentation on Confluence to document technical plans and processes and facilitate knowledge sharing across the team.
* Mentor junior engineers and contribute to the overall growth and development of the engineering team.

Required Qualifications:

* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 5+ years of professional experience in backend development with a focus on microservices.
* Proficiency in Java, with a strong preference for expertise in the Spring framework.
* Strong experience with Apache Kafka for building event-driven architectures.
* Hands-on experience with AWS SQS for message queuing and processing.
* Expertise in Redis for caching and in-memory data management.
* Solid understanding of Postgres or other relational databases, including performance tuning, migrations, and optimization.
* Proven experience with Kubernetes for container orchestration and management.
* Proficiency in developing and consuming RESTful APIs and gRPC services.
* Proficiency with the command line, Git for version control, and GitHub for code reviews.
* Familiarity with OAuth for secure authentication and authorization.
* Strong understanding of software development best practices, including version control, testing, and CI/CD automation.
* Excellent problem-solving skills and the ability to work independently and as part of a team.
* Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Preferred Qualifications:

* Experience working in the mortgage and fintech industries, with a deep understanding of domain-specific challenges and B2B SaaS requirements.
* Experience managing codebases in a monorepo environment with Bazel for build automation.
* Understanding of security best practices and their implementation in microservices.
* Experience with performance monitoring and logging tools such as Grafana, Sentry, pganalyze, Prometheus, and New Relic.
* Familiarity with cloud platforms such as AWS.
* Familiarity with Python.

#Salary and compensation
No salary data published by the company; estimated based on similar senior backend engineering roles:

$65,000 — $115,000/year
#Location
Gurgaon, Haryana, India
This job post at Constant Contact is closed and the position has likely been filled. Please do not apply.
Are you passionate about technology and staying up-to-date on industry trends? Do you consider yourself an advocate for change, eager to step outside your comfort zone? As a Software Engineer on our Data Platform team, you will be a crucial member of our agile team, working with innovative tools, processes, and people to engineer web applications that are distributed and consumed on a massive scale.

This is a remote-first US position. Or, if desired, you can work in a hybrid model from one of our offices in Boston, MA; Waltham, MA; Gainesville, FL; or Santa Monica, CA. Whatever it takes for you to be successful.

What you'll do

* Collaborate with Product Management, data scientists, and senior members of the development team to understand business needs, identify bottlenecks, and find the best technical solutions for meeting those needs
* Build scalable, robust, and maintainable infrastructure for consumers to derive insights and build models efficiently
* Mentor junior members of the team on best practices, architecture, and development
* Stay on top of industry trends and ensure the team appropriately utilizes the latest technologies and best practices
* Be a key contributor to the creation of a long-term, scalable architecture
* Research emerging technologies and software

Who you are

* 3+ years of experience developing large-scale applications for data pipelines is required
* Expert knowledge of Spark, EMR, Elasticsearch, streaming technologies, and similar frameworks is a must to be successful in this role
* Bonus points for knowledge of Apache projects: NiFi, Hudi, Presto & Flink
* Knowledge of languages such as Python, Java, and Groovy
* Experience developing applications for the AWS ecosystem, including Kubernetes, EMR, Athena, CloudFormation, CodePipeline, and S3
* Experience working with Java and Spring (REST API services)
* Experience with unit testing and test-driven development in Python, Java, and JavaScript
* Experience working with Git in a CI/CD environment
* Proven ability to design future-proof, maintainable, large-scale systems
* Ability to articulate engineering design strategies related to scalability, performance, security, usability, and development platforms
* Effective problem-solving and analytical skills

#Salary and compensation
No salary data published by the company; estimated based on similar data platform engineering roles:

$70,000 — $115,000/year
#Location
Waltham, Massachusetts, United States
This job post at Nielsen is closed and the position has likely been filled. Please do not apply.
Data Science is at the core of Nielsen's business. Our team of researchers comes from diverse disciplines, and they drive innovation, new product ideation, experimental design and testing, complex analysis, and delivery of data insights around the world. We support all International Media clients and are located where our clients are.

Lead Data Scientist - Remote - 101791
Data Science - Remote

The Lead Data Scientist's primary responsibility in the Audio Data Science team is to develop creative solutions to enhance the data and analysis infrastructure and pipeline which underpin the survey quality for all Nielsen Audio survey products. In order to deliver high quality standards, the Data Scientist will work as a subject matter expert on a team of analysts to establish, maintain, and continuously improve data tools and processes supporting the Audio Data Science team.

Tasks will include developing system enhancements, writing procedural and technological documentation, working with cross-functional teams to implement solutions into production systems, supporting survey methodology enhancement projects, and supporting client-facing data requests.

What will I do?
* Maintain and continuously improve the variety of data infrastructure, analysis, production, and QA processes for the Audio Data Science team
* Assist in the transition of the data science tech infrastructure away from legacy systems and methods
* Work with cross-functional teams to implement and validate enhanced audience measurement methodologies
* Build and refine data queries from large relational databases/data warehouses/data lakes for various analyses and/or requests
* Utilize tools such as Python, Tableau, AWS, Databricks, etc. to independently develop, test, and implement high-quality custom, modular code to perform complex data analysis and visualizations and answer client queries
* Maintain and update comprehensive documentation on departmental procedures, checklists, and metrics
* Implement prevention and detection controls to ensure data integrity, as well as detect and address quality escapes
* Work closely with internal customers and IT personnel to improve current processes and engineer new methods, frameworks, and data pipelines
* Work as an integral member of the Audio Data Science team in a time-critical production environment
* Key tasks include, but are not limited to, data integration, data harmonization, automation, examining large volumes of data, and identifying & implementing methodological, process, & technology improvements
* Develop and maintain the underlying infrastructure to support forecasting & statistical models, machine learning solutions, and big data pipelines (from internal and external sources) used in a production environment

Is this for me?
* Undergraduate or graduate degree in mathematics, statistics, engineering, computer science, economics, business, or fields that employ rigorous data analysis
* Must be proficient with Python (and Spark/Scala) to develop shareable software with the appropriate technical documentation
* Experience utilizing GitLab, Git, or similar to manage code development
* Experience utilizing Apache Spark, Databricks & Airflow
* Expertise with Tableau or other data visualization software and techniques
* Experience with containerization such as Docker and/or Kubernetes
* Expertise in querying large datasets with SQL and in working with Oracle, Netezza, data warehouse, and data lake data structures
* Experience in leveraging CI/CD pipelines
* Experience utilizing cloud computing platforms such as AWS, Azure, etc.
* Strong ability to proactively gather information, and to work independently as well as within a multidisciplinary team
* Proficiency in the MS Office suite (Excel, Access, PowerPoint, and Word) and/or Google office apps (Sheets, Docs, Slides, Gmail)

Preferred
* Knowledge of machine learning and data modeling techniques such as Time Series, Decision Trees, Random Forests, SVM, Neural Networks, Incremental Response Modeling, and Credit Scoring
* Knowledge of survey sampling methodologies
* Knowledge of statistical tests and procedures such as ANOVA, Chi-squared, Correlation, Regression, etc.

ABOUT NIELSEN
As the arbiter of truth, Nielsen Global Media fuels the media industry with unbiased, reliable data about what people watch and listen to. To discover what's true, we measure across all channels and platforms, from podcasts to streaming TV to social media. And when companies and advertisers are armed with the truth, they have a deeper understanding of their audiences and can accelerate growth.

Do you want to move the industry forward with Nielsen? Our people are the driving force. Your thoughts, ideas, and expertise can propel us forward. Whether you have fresh thinking around maximizing a new technology or you see a gap in the market, we are here to listen and take action. Our team is made strong by a diversity of thoughts, experiences, skills, and backgrounds. You'll enjoy working with smart, fun, curious colleagues who are passionate about their work. Come be part of a team that motivates you to do your best work!

Nielsen is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class.

#Salary and compensation
No salary data published by the company; estimated based on similar data science roles:

$80,000 — $120,000/year