Why TrueML?

TrueML is a mission-driven financial software company that aims to create better customer experiences for distressed borrowers. Consumers today want personal, digital-first experiences that align with their lifestyles, especially when it comes to managing finances. TrueML's approach uses machine learning to engage each customer digitally and adjust strategies in real time in response to their interactions.

The TrueML team includes inspired data scientists, financial services industry experts, and customer experience fanatics building technology that serves people in a way that recognizes their unique needs and preferences as human beings, and that endeavors to ensure nobody gets locked out of the financial system.

About the Role:

As a Senior Data Engineer II, you will play a pivotal role in designing, building, and maintaining our data lakehouse platform. You will leverage open table formats like Apache Iceberg to create scalable, reliable data solutions with optimized query performance across a broad spectrum of analytical workloads and emerging data applications. In this role, you'll develop and operate robust data pipelines, integrating diverse source systems and implementing efficient data transformations for both batch and streaming data.

Work-Life Benefits
* Unlimited PTO
* Medical benefit contributions in accordance with local laws and the type of employment agreement

What you'll do:
* Data lakehouse: Design, build, and operate robust data lakehouse solutions using open table formats like Apache Iceberg, focusing on scalability, reliability, and optimized query performance for a wide range of analytical workloads and emerging data applications.
* Pipelines and transformation: Integrate with diverse source systems and construct scalable data pipelines. Implement efficient data transformation logic for both batch and streaming data, accommodating various data formats and structures.
* Data modeling: Analyze business requirements and profile source data to design, develop, and implement robust data models and curated data products that power reporting, analytics, and machine learning applications.
* Data infrastructure: Develop and manage scalable AWS cloud infrastructure for the data platform, employing Infrastructure as Code (IaC) to reliably support diverse data workloads. Implement CI/CD pipelines for automated, consistent, and scalable infrastructure deployments across all environments, adhering to best practices and company standards.
* Monitoring and maintenance: Monitor data workloads for performance and errors, and troubleshoot issues to maintain high levels of data quality, freshness, and adherence to defined SLAs.
* Collaboration: Work closely with Data Services and Data Science colleagues to drive the evolution of our data platform, delivering solutions that empower data users and satisfy stakeholder needs throughout the organization.

A successful candidate will have:
* Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's degree is a plus).
* 5+ years of hands-on engineering experience (software or data), including 3+ years in data-focused roles.
* Experience implementing data lake and data warehousing platforms.
* Strong Python and SQL skills applied to data engineering tasks.
* Proficiency with the AWS data ecosystem, including services like S3, Glue Catalog, IAM, and Secrets Manager.
* Experience with Terraform and Kubernetes.
* Track record of successfully building and operationalizing data pipelines.
* Experience working with diverse data stores, particularly relational databases.

You might also have:
* Experience with Airflow, dbt, and Snowflake.
* Experience with stream processing technology, e.g., Flink or Spark Streaming.
* Familiarity with Domain-Driven Design principles and event-driven architectures.
* Certification in relevant technologies or methodologies.

$62,000 - $77,000 a year

Compensation Disclosure: This information reflects the anticipated base salary range for this position based on current national data. Minimums and maximums may vary based on location. Individual pay is based on skills, experience, and other relevant factors.

This role is only approved to hire within the following LatAm countries: Mexico, Argentina, or the Dominican Republic.

We are a dynamic group of people who are subject matter experts with a passion for change. Our teams are crafting solutions to big problems every day. If you're looking for an opportunity to do impactful work, join TrueML and make a difference.

Our Dedication to Diversity & Inclusion

TrueML is an equal-opportunity employer. We promote, value, and thrive with a diverse and inclusive team. Different perspectives contribute to better solutions, and this makes us stronger every day. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Design, Python, Cloud, Senior, and Engineer jobs:

$60,000 - $135,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1,200 employees across 15 countries, working remotely as well as in our headquarters in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.

Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/

Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies and entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.

Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 microservices in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.

About the Role:
We are seeking a talented and motivated data engineer to design, develop, and maintain our data infrastructure and to build backend systems that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. This role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance. The candidate will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organisation.

Responsibilities:
* Software engineering excellence: Write clean, efficient, and maintainable code in JavaScript or Python while adhering to best practices and design patterns.
* Design, build, and maintain systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures.
* Data pipeline development: Design, develop, and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources.
* Data storage and warehousing: Build and optimize databases, data lakes, and data warehouses (e.g., Snowflake) for high-performance querying.
* Data integration: Work with APIs and batch and streaming data sources to ingest and transform data.
* Performance optimization: Optimize queries, indexing, and partitioning for efficient data retrieval.
* Collaboration: Work with data analysts, data scientists, software developers, and product teams to understand requirements and deliver scalable solutions.
* Monitoring and debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably.
* Ownership and problem-solving: Proactively identify issues and bottlenecks and propose innovative solutions to address them.

Requirements:
* 3+ years of experience in software development.
* Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* Strong problem-solving skills: Ability to debug and optimize data processing workflows.
* Programming fundamentals: Solid understanding of data structures, algorithms, and software design patterns.
* Software engineering experience: Demonstrated experience (SDE II/III level) designing, developing, and delivering software solutions with modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala, or Java).
* ETL tools and frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink, or similar technologies.
* Cloud platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift).
* Databases and warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, Elasticsearch).
* Version control and CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment.
* Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment.
* Experience with data visualization tools (e.g., Superset, Tableau), Terraform, IaC, ML/AI data pipelines, and DevOps practices is a plus.

EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary, and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

#LI-Remote #LI-NJ1

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Design, Python, DevOps, JavaScript, Cloud, API, Marketing, Sales, Engineer, and Backend jobs:

$60,000 - $90,000/year
#Location
Delhi
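The HighLevel posting above centers on event-driven architectures with message queues such as Pub/Sub, Kafka, or RabbitMQ, which typically deliver each message at least once. As a sketch only (using an in-memory `queue.Queue` as a stand-in for a real broker; the names are ours, not HighLevel's), deduplicating by event ID keeps a handler idempotent under redelivery:

```python
import queue

def drain_events(q: "queue.Queue", handler) -> list:
    """Process every queued event exactly once, skipping redelivered duplicates."""
    seen_ids = set()
    results = []
    while True:
        try:
            event = q.get_nowait()
        except queue.Empty:
            break
        if event["id"] in seen_ids:  # duplicate delivery: acknowledge and skip
            continue
        seen_ids.add(event["id"])
        results.append(handler(event))
    return results

# Simulate at-least-once delivery: event "a" arrives twice.
q = queue.Queue()
for event in [{"id": "a", "n": 1}, {"id": "b", "n": 2}, {"id": "a", "n": 1}]:
    q.put(event)
processed = drain_events(q, lambda e: e["n"] * 10)
```

With a real broker the `seen_ids` set would live in durable storage (e.g., Redis or a database) so deduplication survives consumer restarts.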
Hinge is the dating app designed to be deleted.

In today's digital world, finding genuine relationships is tougher than ever. At Hinge, we're on a mission to inspire intimate connection to create a less lonely world. We're obsessed with understanding our users' behaviors to help them find love, and our success is defined by one simple metric: setting up great dates. With tens of millions of users across the globe, we've become the most trusted way to find a relationship, for all.

Collaborate with a cross-disciplinary team to build features and work with other engineers to plan out projects. Design and build backend systems with an emphasis on quality and scalability. Own complex projects end-to-end and effectively communicate with stakeholders. Work in a cloud-native tech stack: Kubernetes, AWS, Go web services, Postgres, Redis, Kafka. Be a thought partner for backend team strategy and technical direction. Create and maintain feedback cycles with your peers and manager. Operate and maintain production systems. Support and mentor junior developers. Assist with team hiring and learning. Use strong communication skills (written and verbal) to provide product and project ideas that contribute to trust and safety goals. Telecommuting may be permitted. When not telecommuting, must report to 809 Washington St, New York, NY 10014. Salary: $169,229 - $220,000 per year.

Minimum Requirements: Bachelor's degree or U.S. equivalent in Electrical Engineering, Computer Science, Computer Engineering, Software Engineering, Information Technology, or a related field, plus 5 years of professional experience as a Software Engineer, Software Developer, or in any occupation/position/job title involving building backend infrastructures. In lieu of a Bachelor's degree plus 5 years of experience, the employer will accept a Master's degree or U.S. equivalent in Electrical Engineering, Computer Science, Computer Engineering, Software Engineering, Information Technology, or a related field plus 3 years of professional experience as a Software Engineer, Software Developer, or in any occupation/position/job title involving building backend infrastructures.

Must also have the following:
* 3 years of professional experience building backend infrastructures for consumer-facing (business-to-consumer) features built on iOS and Android;
* 3 years of professional experience handling large volumes (millions daily) of data within AWS using Python and Golang scripting languages and handling cloud-based containers, including Docker and Kubernetes;
* 3 years of professional experience handling data and event streaming using Apache Spark and handling data storage using relational databases, including MySQL and PostgreSQL, and NoSQL databases, including Redis;
* 3 years of professional experience performing and employing software engineering best practices across the full software development life cycle (including coding standards, code reviews, source control management, build processes, testing, and operations);
* 2 years of professional experience performing backend software engineering (including leading and collaborating with the Internal Tooling, Bad Actor Detection, Privacy & Compliance, and Safety Product teams across web and apps) and developing backend infrastructures to drive systems that support the trust and safety of users with microservices written in Golang;
* 2 years of professional experience leading and creating project roadmaps of deployments for B2C web applications and mobile apps (including iOS and Android) and breaking down steps to delegate to peers; and
* 2 years of professional experience reviewing peer code and mentoring junior engineers.

Please send your resume to [email protected] and specify ad code [WLLL].

$169,229 - $220,000 a year

Factors such as the scope and responsibilities of the position, the candidate's work experience, education/training, job-related skills, internal peer equity, and market and business considerations may influence the base pay offered. This salary range is reflective of a position based in New York, New York.

#LI-DNI

As a member of our team, you'll enjoy:

401(k) Matching: We match 100% of the first 10% of pre-tax 401(k) contributions you make, up to a maximum of $10,000 per year.

Professional Growth: Get a $3,000 annual Learning & Development stipend once you've been with us for three months. You also get free access to Udemy, an online learning and teaching marketplace with over 6,000 courses, starting your first day.

Parental Leave & Planning: When you become a new parent, you're eligible for 100% paid parental leave (20 paid weeks for both birth and non-birth parents).

Fertility Support: You'll get easy access to fertility care through Carrot, from basic treatments to fertility preservation. We also provide $10,000 toward fertility preservation. You and your spouse/domestic partner are both eligible.

Date Stipend: All Hinge employees receive a $100 monthly stipend for epic dates, romantic or otherwise. Hinge Premium is also free for employees and their loved ones.

ERGs: We have eight Employee Resource Groups (ERGs): Asian, Unapologetic, Disability, LGBTQIA+, Vibras, Women/Nonbinary, Parents, and Remote. They hold regular meetings, host events, and provide dedicated support to the organization and its community.

At Hinge, our core values are:

Authenticity: We share, never hide, our words, actions, and intentions.

Courage: We embrace lofty goals and tough challenges.

Empathy: We deeply consider the perspective of others.

Diversity inspires innovation.

Hinge is an equal-opportunity employer. We value diversity at our company and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We believe success is created by a diverse workforce of individuals with different ideas, strengths, interests, and cultural backgrounds.

If you require reasonable accommodation to complete a job application, pre-employment testing, or a job interview, or to otherwise participate in the hiring process, please contact [email protected].

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Design, Python, Docker, Cloud, NoSQL, Mobile, Senior, Junior, Golang, Engineer, and Backend jobs:

$60,000 - $110,000/year
#Location
New York, New York
Verana Health, a digital health company that delivers quality drug lifecycle and medical practice insights from an exclusive real-world data network, recently secured a $150 million Series E led by Johnson & Johnson Innovation - JJDC, Inc. (JJDC) and Novo Growth, the growth-stage investment arm of Novo Holdings.

Existing Verana Health investors GV (formerly Google Ventures), Casdin Capital, and Brook Byers also joined the round, as did notable new investors, including the Merck Global Health Innovation Fund, THVC, and Breyer Capital.

We are driven to create quality real-world data in ophthalmology, neurology, and urology to accelerate quality insights across the drug lifecycle and within medical practices. Additionally, we are driven to advance the quality of care and quality of life for patients. DRIVE defines our internal purpose and is the galvanizing force that helps ground us in a shared corporate culture. DRIVE is: Diversity, Responsibility, Integrity, Voice-of-Customer, and End-Results. Click here to read more about our culture and values.

Our headquarters are located in San Francisco, and we have additional offices in Knoxville, TN and New York City, with employees working remotely in AZ, CA, CO, CT, FL, GA, IL, LA, MA, NC, NJ, NY, OH, OR, PA, TN, TX, UT, VA, WA, and WI. All employees are required to have permanent residency in one of these states. Candidates who are willing to relocate are also encouraged to apply.

Job Title: Data Engineer

Job Intro:
As a Data/Software Engineer at Verana Health, you will be responsible for extending a set of tools used for data pipeline development. You will bring strong hands-on experience in the design and development of cloud services, along with a deep understanding of data quality, metadata management, data ingestion, and curation. You will build software solutions using Apache Spark, Hive, Presto, and other big data frameworks; analyze systems and requirements to provide the best technical solutions with regard to flexibility, scalability, and reliability of the underlying architecture; and document and improve software testing and release processes across the entire data team.

Job Duties and Responsibilities:
* Architect, implement, and maintain scalable data architectures to meet data processing and analytics requirements using AWS and Databricks.
* Troubleshoot complex data issues and optimize pipelines with data quality, computation, and cost in mind.
* Collaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions.
* Design solutions for ingesting and curating highly variable data structures in a highly concurrent cloud environment.
* Retain metadata that tracks execution details to support reproducibility and provide operational metrics.
* Create routines that add observability and alerting to the health of pipelines.
* Establish data quality checks and ensure data integrity and accuracy throughout the data lifecycle.
* Research, perform proofs of concept with, and leverage performant database technologies (such as Aurora Postgres, Elasticsearch, and Redshift) to support end-user applications that need sub-second response times.
* Participate in code reviews.
* Stay proactively updated on industry trends and emerging technologies in data engineering.
* Develop data services using RESTful APIs that are secure (OAuth/SAML), scalable (containerized using Docker), observable (using monitoring tools like Datadog or the ELK stack), and documented using OpenAPI/Swagger, built with Python/Java frameworks and deployed via automated CI/CD using GitHub Actions.
* Document data engineering processes, architectures, and configurations.

Basic Requirements:
* A minimum of a BS degree in computer science, software engineering, or a related scientific discipline.
* A minimum of 3 years of experience in software development.
* Strong programming skills in languages such as Python/PySpark and SQL.
* Experience with Delta Lake, Unity Catalog, Delta Sharing, and Delta Live Tables (DLT).
* Experience with data pipeline orchestration tools such as Airflow and Databricks Workflows.
* 1 year of experience working in an AWS cloud computing environment, preferably with Lambda, S3, SNS, and SQS.
* Understanding of data management principles (governance, security, cataloging, lifecycle management, privacy, quality).
* Good understanding of relational databases.
* Demonstrated ability to build software tools, in a collaborative and team-oriented environment, that are product- and customer-driven.
* Strong communication and interpersonal skills.
* Use of source code version control.
* Hands-on experience with Docker containers and container orchestration.

Bonus:
* Healthcare and medical data experience is a plus.
* Additional experience with modern compiled programming languages (C++, Go, Rust).
* Experience building HTTP/REST APIs using popular frameworks.
* Building out extensive automated test suites.

Benefits:
* We provide health, vision, and dental coverage for employees.
* Verana pays 100% of employee insurance coverage and 70% of family coverage, plus an additional monthly $100 individual / $200 HSA contribution with an HDHP.
* Spring Health mental health support.
* Flexible vacation plans.
* A generous parental leave policy and family-building support through the Carrot app.
* $500 learning and development budget.
* $25/wk in DoorDash credit.
* Headspace meditation app: unlimited access.
* Gympass: 3 free live classes per week, plus monthly discounts for gyms like SoulCycle.

Final note:
You do not need to match every listed expectation to apply for this position. Here at Verana, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Design, Docker, Testing, Cloud, and Engineer jobs:

$70,000 - $100,000/year
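The Verana posting above calls for retaining metadata about pipeline execution details to support reproducibility. One illustrative approach (the record shape and pipeline name below are hypothetical, not Verana's) is to fingerprint each run's pipeline name and input parameters so that identical runs can be recognized and reproduced later:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class RunRecord:
    """Minimal execution-metadata record for one pipeline run."""
    pipeline: str
    inputs: dict

    def fingerprint(self) -> str:
        # Canonical JSON (sorted keys) makes the hash stable across key order.
        payload = json.dumps(
            {"pipeline": self.pipeline, "inputs": self.inputs}, sort_keys=True
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]

# Two records with the same inputs in a different key order share a fingerprint;
# changing an input value produces a different one.
run_a = RunRecord("curate_claims", {"source": "s3://bucket/raw", "version": 2})
run_b = RunRecord("curate_claims", {"version": 2, "source": "s3://bucket/raw"})
run_c = RunRecord("curate_claims", {"source": "s3://bucket/raw", "version": 3})
```

A real system would persist these records alongside operational metrics (start time, row counts, status) in a metadata store.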
About Flywheel

Flywheel's suite of digital commerce solutions accelerates growth across all major digital marketplaces for the world's leading brands. We give clients access to near real-time performance measurement and improve sales, share, and profit. With teams across the Americas, Europe, APAC, and China, we offer a career with real impact, endless growth opportunities, and the support you need to be the best you can be.

Opportunity

We're looking for a Mid/Senior Data Engineer to join our team. The best candidates will hit the ground running and contribute to our data team as we develop and maintain necessary data automation, reports, ETL/ELT, and quality controls using leading-edge cloud technologies. You will have a deep knowledge and understanding of all stages in the software development life cycle. The ability to self-start, mentor and manage less experienced data engineers, learn new technology, and manage multiple priorities, plus strong communication, are all in your wheelhouse!

What you'll do:
* Write high-level, well-documented code in Python and SQL.
* Build data pipelines that range from simple to complex, using technologies like Apache Airflow, AWS Lambda, Step Functions, EventBridge, and other AWS serverless technologies.
* Build ETL pipelines with Snowflake, AWS Glue, PySpark, and other ETL tools.
* Work with a mix of structured and unstructured data across cloud-based batch and streaming architectures.
* Engage directly with technical analysts, project managers, and other technical teams to help build concise requirements and ensure timely completion of projects.
* Work with Git, CI/CD, and version control to maintain code and documentation.
* Design and vet solutions for technical problems, and solicit team feedback during the design process.
* Mentor, manage, train, and participate in paired programming in a lead capacity.

Who you are:
* Must have experience with version control, GitHub, and the software development life cycle.
* 4 years of experience with SQL and data modeling.
* 4 years of experience developing with Python.
* Demonstrated experience interacting with RESTful APIs.
* Experience with data pipelines / batch automation in at least one major technology (e.g., Apache Airflow).
* Experience with one of the major cloud providers (AWS preferred).
* Experience with AWS serverless technologies (Lambda, EventBridge, Step Functions, SQS).
* Experience working in an agile development environment.
* Streaming experience (Kafka, Kinesis, etc.).
* Familiarity with Jira.
* Experience with other AWS technologies: EC2, Glue, Athena, etc.
* Experience with additional cloud platforms beyond AWS.
* Experience developing CI/CD, automations, and quality-of-life improvements for developers.

Working at Flywheel

We are proud to offer all Flywheelers a competitive rewards package, unparalleled career growth opportunities, and a supportive, fun, and engaging culture.

* We have office hubs across the globe where team members can go to feel productive, inspired, and connected to others
* Vacation time will depend on where you're located
* Great learning and development opportunities
* Benefits will depend on where you're located
* Volunteering opportunities
* Learn more about us here: Life at Flywheel

The Interview Process:

Every role starts the same: an introductory call with someone from our Talent Acquisition team. We will be looking for company and values fit as well as your professional experience; there may be some technical role-specific questions during this call.

Every role is different after the initial call, but you can expect to meet several people from the team 1:1, and there might be further skill assessments in the form of a Take Home Assignment/Case Study Presentation or Pair Programming/Live Coding exercise, depending on the role. In your initial call, we will walk you through exactly what to expect the process to be.

Inclusive Workforce

At Flywheel, our goal is to create a culture where individuals of all backgrounds feel comfortable bringing their authentic selves to work. We want all Flywheel people to feel included and truly empowered to contribute fully to our vision and goals.

Flywheel is an Equal Opportunity Employer and participates in E-Verify. Everyone who applies will receive fair consideration for employment. We do not discriminate based upon race, colour, religion, sex, sexual orientation, age, marital status, gender identity, national origin, disability, or any other applicable legally protected characteristic in the location in which the candidate is applying.

If you have any accessibility requirements that would make you more comfortable during the application and interview process, please let us know at [email protected] so that we can support you.

Please note, we do not accept unsolicited resumes.

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Design, Python, Serverless, Cloud and Engineer jobs:
$60,000 — $110,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
๐ Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
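As an illustration of the batch-pipeline work the Flywheel posting above describes (this example is not part of the posting; the data, field names, and quality rule are all hypothetical), here is a minimal, dependency-free sketch of an extract-transform-load job with a basic quality gate:

```python
import csv
import io

# Hypothetical extract source: a real pipeline would read from S3, an API,
# or a database rather than an in-memory CSV string.
RAW_CSV = """sku,units_sold,price
A-100,3,9.99
A-101,,4.50
A-102,5,12.00
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop rows that fail a basic quality check and compute revenue."""
    clean = []
    for row in rows:
        if not row["units_sold"]:  # quality control: reject missing values
            continue
        clean.append({
            "sku": row["sku"],
            "revenue": int(row["units_sold"]) * float(row["price"]),
        })
    return clean

def load(rows: list[dict]) -> dict[str, float]:
    """Stand-in for a warehouse write: return revenue keyed by SKU."""
    return {r["sku"]: r["revenue"] for r in rows}

result = load(transform(extract(RAW_CSV)))
```

In an actual deployment, each step would typically become an Airflow task or a Lambda/Step Functions stage; the sketch only shows the extract/transform/load shape and a simple quality gate.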
About the Role

As a staff-level Data Engineer at Maven Clinic, you will be responsible for driving the technical vision and roadmap for our Data Engineering and Data Platform teams. You will lead the design, implementation, and maintenance of performant and cost-efficient data pipelines (both batch and streaming), and contribute to the overall Maven data architecture, with the goal of expanding our data warehouse functionality and upleveling our data analytics and machine learning capabilities. You will work closely with cross-functional teams to ensure the delivery of high-quality data sets and features that empower our key business and product decision-making, ultimately contributing to Maven's growth and customer success.

Key Responsibilities

* Lead the design, implementation, and maintenance of highly performant and cost-efficient data pipelines
* In partnership with the data platform and backend teams, contribute to the architecture design of highly scalable and reliable data systems
* Collaborate with product and service engineers in reviewing the modeling of the source datasets, guiding the holistic data vision across various data systems
* Drive technical design discussions in all related data domains and provide guidance to team members on best practices, coding standards, and architecture principles
* Mentor and guide mid-level data engineers, helping to develop their technical skills and cultivate a culture of continuous learning and improvement
* Identify and evaluate emerging technologies, tools, and trends that can drive innovation and improve the efficiency and effectiveness of our data engineering processes

Qualifications

* Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience
* Minimum of 7 years of experience in data engineering or relevant backend development, with a proven track record of building highly scalable, performant, and reliable data pipelines
* Extensive experience using SQL or a similar data processing language to process and analyze data
* Extensive experience with data modeling for large distributed data warehouses (or similar cloud-based solutions), with the ability to discuss in depth the general principles and trade-offs of different modeling approaches
* Hands-on experience with a workflow orchestration tool (e.g. Apache Airflow)
* Excellent collaboration and communication skills, with a demonstrated ability to work effectively with cross-functional business partners, especially with teams outside of the engineering group
* Working proficiency in a programming language (Java, Python, Go, etc.)
* A product mindset to embrace business needs and produce scalable data/engineering solutions
* Experience leading technical design discussions and providing guidance on best practices, coding standards, and architecture principles
* Strong problem-solving and analytical skills, with a proven ability to deliver high-quality code in a fast-paced environment

At Maven Clinic, we are committed to building a world-class digital healthcare platform that empowers women and families to live healthier and more fulfilling lives. If you are a seasoned engineer with a passion for building scalable, performant, and reliable systems, and want to make a real impact in the world of healthcare, we would love to hear from you.

For candidates in NYC or CO, the salary range for this role is $195,000 - $300,000 per year. You may also be entitled to receive a bonus, stock options, and benefits. Individual pay decisions are based on a number of factors, including qualifications for the role, experience level, and skillset.

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Design, Cloud, Engineer and Backend jobs:
$40,000 — $105,000/year
#Location
New York City, New York, United States
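The data-modeling qualification in the Maven posting above can be illustrated with a toy star schema (this sketch is not from the posting; the table and column names are invented): a fact table joined to a dimension table, then rolled up for analytics.

```python
import sqlite3

# Toy star schema: one dimension table and one fact table.
# All names are illustrative, not Maven's actual model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_member (
    member_id INTEGER PRIMARY KEY,
    region    TEXT NOT NULL
);
CREATE TABLE fact_appointment (
    appointment_id INTEGER PRIMARY KEY,
    member_id      INTEGER NOT NULL REFERENCES dim_member(member_id),
    duration_min   INTEGER NOT NULL
);
INSERT INTO dim_member VALUES (1, 'US'), (2, 'EU');
INSERT INTO fact_appointment VALUES (10, 1, 30), (11, 1, 45), (12, 2, 20);
""")

# A typical analytical rollup: total appointment minutes per region.
rows = conn.execute("""
    SELECT d.region, SUM(f.duration_min) AS total_min
    FROM fact_appointment f
    JOIN dim_member d USING (member_id)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
```

The trade-off this shape illustrates: dimensions stay small and descriptive while facts grow with events, so aggregations touch one narrow join rather than a wide denormalized table.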
This job post is closed and the position is probably filled. Please do not apply.
Closed by robot after the apply link errored with code 404, 2 years ago.
Data Science is at the core of Nielsen's business. Our team of researchers come from diverse disciplines, and they drive innovation, new product ideation, experimental design and testing, complex analysis, and delivery of data insights around the world. We support all International Media clients and are located where our clients are.

Lead Data Scientist - Remote - 101791
Data Science - Remote

The Lead Data Scientist's primary responsibility on the Audio Data Science team is to develop creative solutions to enhance the data and analysis infrastructure and pipeline which underpin survey quality for all Nielsen Audio survey products. In order to deliver high quality standards, the Data Scientist will work as a subject matter expert on a team of analysts to establish, maintain, and continuously improve the data tools and processes supporting the Audio data science team.
Tasks will include developing system enhancements, producing procedural and technological documentation, working with cross-functional teams to implement solutions into production systems, supporting survey methodology enhancement projects, and supporting client-facing data requests.

What will I do?
Maintain and continuously improve the variety of data infrastructure, analysis, production, and QA processes for the Audio Data Science team
Assist in the transition of the data science tech infrastructure away from legacy systems and methods
Work with cross-functional teams to implement and validate enhanced audience measurement methodologies
Build and refine data queries from large relational databases/data warehouses/data lakes for various analyses and/or requests
Utilize tools such as Python, Tableau, AWS, Databricks, etc. to independently develop, test, and implement high-quality custom, modular code to perform complex data analysis and visualizations and answer client queries
Maintain and update comprehensive documentation on departmental procedures, checklists, and metrics
Implement prevention and detection controls to ensure data integrity, as well as detect and address quality escapes
Work closely with internal customers and IT personnel to improve current processes and engineer new methods, frameworks, and data pipelines
Work as an integral member of the Audio Data Science team in a time-critical production environment
Key tasks include (but are not limited to) data integration, data harmonization, automation, examining large volumes of data, and identifying and implementing methodological, process, and technology improvements
Develop and maintain the underlying infrastructure to support forecasting and statistical models, machine learning solutions, and big data pipelines (from internal and external sources) used in a production environment

Is this for me?
Undergraduate or graduate degree in mathematics, statistics, engineering, computer science, economics, business, or another field that employs rigorous data analysis
Must be proficient with Python (and Spark/Scala) to develop sharable software with the appropriate technical documentation
Experience utilizing GitLab, Git, or similar to manage code development
Experience utilizing Apache Spark, Databricks, and Airflow
Expertise with Tableau or other data visualization software and techniques
Experience with containerization such as Docker and/or Kubernetes
Expertise in querying large datasets with SQL and in working with Oracle, Netezza, Data Warehouse, and Data Lake data structures
Experience leveraging CI/CD pipelines
Experience utilizing cloud computing platforms such as AWS, Azure, etc.
Strong ability to proactively gather information and work independently as well as within a multidisciplinary team
Proficiency in the MS Office suite (Excel, Access, PowerPoint and Word) and/or Google Office Apps (Sheets, Docs, Slides, Gmail)

Preferred
Knowledge of machine learning and data modeling techniques such as Time Series, Decision Trees, Random Forests, SVM, Neural Networks, Incremental Response Modeling, and Credit Scoring
Knowledge of survey sampling methodologies
Knowledge of statistical tests and procedures such as ANOVA, Chi-squared, Correlation, Regression, etc.
#LI-SF1

ABOUT NIELSEN
As the arbiter of truth, Nielsen Global Media fuels the media industry with unbiased, reliable data about what people watch and listen to. To discover what's true, we measure across all channels and platforms, from podcasts to streaming TV to social media. And when companies and advertisers are armed with the truth, they have a deeper understanding of their audiences and can accelerate growth.
Do you want to move the industry forward with Nielsen? Our people are the driving force. Your thoughts, ideas, and expertise can propel us forward. Whether you have fresh thinking around maximizing a new technology or you see a gap in the market, we are here to listen and take action. Our team is made strong by a diversity of thoughts, experiences, skills, and backgrounds. You'll enjoy working with smart, fun, curious colleagues who are passionate about their work. Come be part of a team that motivates you to do your best work!

Nielsen is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class.

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Data Science, Executive, Cloud, Git, Python, Engineer and Apache jobs:
$80,000 — $120,000/year
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
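The Nielsen posting mentions implementing "prevention and detection controls to ensure data integrity." A minimal sketch of such a detection control (not Nielsen's actual system; the field names and rules are hypothetical) scans records and reports integrity violations:

```python
# Hypothetical integrity rules for survey-like records: each field maps to
# a predicate that a valid value must satisfy.
RULES = {
    "respondent_id": lambda v: isinstance(v, int) and v > 0,
    "listening_min": lambda v: isinstance(v, (int, float)) and 0 <= v <= 1440,
}

def detect_violations(records: list[dict]) -> list[tuple[int, str]]:
    """Return (record index, field name) pairs that fail an integrity rule."""
    violations = []
    for i, record in enumerate(records):
        for field, rule in RULES.items():
            if field not in record or not rule(record[field]):
                violations.append((i, field))
    return violations

sample = [
    {"respondent_id": 7, "listening_min": 95},     # clean record
    {"respondent_id": -1, "listening_min": 2000},  # both fields invalid
]
bad = detect_violations(sample)
```

In a production pipeline, a check like this would run as a gate before data is published, with violations routed to a quality-escape report rather than silently dropped.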