Join SADA India as a Senior Data Engineer, Enterprise Support service!

Your Mission

As a Sr. Data Engineer on the Enterprise Support service team at SADA, you will reduce customer anxiety about running production workloads in the cloud by implementing and iteratively improving observability and reliability. You will have the opportunity to engage with our customers in a meaningful way by defining, measuring, and improving key business metrics; eliminating toil through automation; inspecting code, design, implementation, and operational procedures; enabling experimentation by helping create a culture of ownership; and winning customer trust through education, skill sharing, and implementing recommendations. Your efforts will accelerate our customers' cloud adoption journey, and we will be with them through the transformation of their applications, infrastructure, and internal processes. You will be part of a new social contract between customers and service providers that demands shared responsibility and accountability: our partnership with our customers will ensure we are working towards a common goal and share a common fate.

This is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute their recommendations to our customers, and with Professional Services on large projects that require PMO support.

Pathway to Success

#MakeThemRave is at the foundation of all our engineering. Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems in the Google Cloud Platform.

Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. 
You will be measured twice a year on the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions.

As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations

Customer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly or bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.

Onboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals.

Job Requirements

Required Credentials:

* Google Professional Data Engineer certification, or the ability to complete it within the first 45 days of employment
* A secondary Google Cloud certification in any other specialization

Required Qualifications:

* 4+ years of experience in cloud support
* Experience supporting customers, preferably in 24/7 environments
* Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
* Experience writing software in two or more languages such as Python, Java, Scala, or Go
* Experience building production-grade data solutions (relational and NoSQL)
* Experience with systems monitoring/alerting, capacity planning, and performance tuning
* Experience with BI tools like Tableau, Looker, etc. is an advantage
* 
Consultative mindset that delights the customer by building good rapport with them to fully understand their requirements and provide accurate solutions

Useful Qualifications:

* Mastery in at least one of the following domain areas:
  * Google Cloud Dataflow: building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ; auto-scaling Dataflow clusters and troubleshooting cluster operation issues
  * Data Integration Tools: building data pipelines using modern data integration tools such as Fivetran, Striim, Data Fusion, etc. Must have hands-on experience configuring and integrating with multiple data sources within and outside of Google Cloud
  * Large Enterprise Migration: migrating entire cloud or on-prem assets to Google Cloud, including data lakes, data warehouses, databases, business intelligence, jobs, etc. Provide consultations for optimizing cost, defining methodology, and coming up with a plan to execute the migration
* Experience with IoT architectures and building real-time data streaming pipelines
* Experience operationalizing machine learning models on large datasets
* Demonstrated leadership and self-direction -- a willingness to teach others and learn new techniques
* Demonstrated skill in selecting the right statistical tools for a given data analysis problem
* Understanding of Chaos Engineering
* Understanding of PCI, SOC 2, and HIPAA compliance standards
* Understanding of the principle of least privilege and security best practices
* Understanding of cryptocurrency and blockchain technology

#Salary and compensation
No salary data published by company, so we estimated a salary based on similar Cloud, Senior, and Engineer jobs:

$60,000 — $100,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Thiruvananthapuram, Kerala, India
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
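As a loose illustration of the batch ETL work the Dataflow qualification above describes (read, parse, transform, write), here is a framework-free Python sketch; Apache Beam expresses the same shape as composable PTransforms over PCollections. All record fields and function names here are invented for illustration, not taken from the posting.

```python
# Toy batch ETL pipeline in the shape Beam encourages:
# each stage is a pure function over an iterable of records.
import json
from typing import Iterable


def extract(lines: Iterable[str]) -> Iterable[dict]:
    """Parse raw JSON lines, skipping malformed records."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # in Beam this would go to a dead-letter output


def transform(records: Iterable[dict]) -> Iterable[dict]:
    """Keep completed orders and normalize the amount to cents."""
    for r in records:
        if r.get("status") == "complete":
            yield {"order_id": r["order_id"], "cents": round(r["amount"] * 100)}


def load(records: Iterable[dict]) -> list:
    """Stand-in for a BigQuery or Cloud Storage sink."""
    return list(records)


raw = [
    '{"order_id": 1, "status": "complete", "amount": 9.99}',
    'not json',
    '{"order_id": 2, "status": "pending", "amount": 5.00}',
]
result = load(transform(extract(raw)))
print(result)  # [{'order_id': 1, 'cents': 999}]
```

In a real Dataflow job, each of these stages would be a `ParDo` or `Map`, and the runner would parallelize and autoscale them across workers.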
Remote Senior AI Infra Engineer, AI/ML and Data Infrastructure
The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central team provides the support needed to push this work forward.

The Central team at CZI consists of our Finance, People & DEI, Real Estate, Events, Workplace, Facilities, Security, Brand & Communications, Business Systems, Central Operations, Strategic Initiatives, and Ventures teams. These teams provide strategic support and operational excellence across the board at CZI.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways to help drive solutions. We are uniquely positioned to design, build, and scale software systems to help educators, scientists, and policy experts better address the myriad challenges they face. Our technology team is already helping schools bring personalized learning tools to teachers and schools across the country. We are also supporting scientists around the world as they develop a comprehensive reference atlas of all cells in the human body, and are developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to solve important problems in the biomedical sciences.

The AI/ML and Data Engineering Infrastructure organization works on building shared tools and platforms to be used across all of the Chan Zuckerberg Initiative, partnering with and supporting the work of a wide range of Research Scientists, Data Scientists, and AI Research Scientists, as well as a broad range of Engineers focusing on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale. 
A person in this role will build these technology solutions and help to cultivate a culture of shared best practices and knowledge around core engineering.

What You'll Do

* Participate in the technical design and building of efficient, stable, performant, scalable, and secure AI/ML and data infrastructure engineering solutions.
* Do active hands-on coding on our deep learning and machine learning models.
* Design and implement complex systems integrating with our large-scale AI/ML GPU compute infrastructure and platform, making working across multiple clouds easier and more convenient for our Research Engineers, ML Engineers, and Data Scientists.
* Use your solid experience and skills in building containerized applications and infrastructure using Kubernetes in support of our large-scale GPU research cluster, as well as working on our various heterogeneous and distributed AI/ML environments.
* Collaborate with other team members in the design and build of our cloud-based AI/ML platform solutions, which include Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes.
* Collaborate with our partners on data management solutions for our heterogeneous collection of complex datasets.
* Help build tooling that makes optimal use of our shared infrastructure, empowering our AI/ML efforts with a world-class GPU compute cluster and other compute environments such as our AWS-based services.

What You'll Bring

* BS or MS degree in Computer Science or a related technical discipline, or equivalent experience
* 5+ years of relevant coding experience
* 3+ years of systems architecture and design experience, with a broad range of experience across data, AI/ML, core infrastructure, and security engineering
* Experience scaling containerized applications on Kubernetes or Mesos, including expertise with creating custom containers using secure AMIs and continuous deployment systems that integrate with Kubernetes or Mesos (Kubernetes preferred)
* Proficiency with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, and experience with on-prem and colocation service hosting environments
* Proven coding ability with a systems language such as Rust, C/C++, C#, Go, Java, or Scala
* Shown ability with a scripting language such as Python, PHP, or Ruby
* AI/ML platform operations experience on challenging data and systems platforms, including large-scale Kafka and Spark deployments (or counterparts such as Pulsar, Flink, and/or Ray), as well as workflow scheduling tools such as Apache Airflow, Dagster, or Apache Beam
* MLOps experience working with medium- to large-scale GPU clusters in Kubernetes (Kubeflow), HPC environments, or large-scale cloud-based ML deployments
* Working knowledge of Nvidia CUDA and custom AI/ML libraries
* Knowledge of Linux systems optimization and administration
* Understanding of data engineering, data governance, data infrastructure, and AI/ML execution platforms
* PyTorch, Keras, or TensorFlow experience a strong nice-to-have
* HPC and Slurm experience a strong nice-to-have

Compensation

The Redwood City, CA base pay range for this role is $190,000 - $285,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside Redwood City are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You

We're thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible.

* CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.
* An annual benefit for employees that can be used most meaningfully for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
* CZI Life of Service Gifts are awarded to employees to "live the mission" and support the causes closest to them.
* Paid time off to volunteer at an organization of your choice.
* Funding for select family-forming benefits.
* Relocation support for employees who need assistance moving to the Bay Area.
* And more!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn about our diversity, equity, and inclusion efforts.

If you're interested in a role but your previous experience doesn't perfectly align with each qualification in the job description, we still encourage you to apply, as you may be the perfect fit for this or another role.

Explore our work modes, benefits, and interview process at www.chanzuckerberg.com/careers.

#LI-Remote

#Salary and compensation
No salary data published by company, so we estimated a salary based on similar Design, Amazon, Recruiter, Cloud, Senior, and Engineer jobs:

$42,500 — $82,500/year
#Location
Redwood City, California, United States
CAPCO POLAND

*We are looking for a Poland-based candidate. The job is remote but may require some business trips.

Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to #BeYourselfAtWork. We celebrate individuality and recognize that diversity and inclusion, in all forms, is critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table, so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.

Capco Poland is a global technology and management consultancy specializing in driving digital transformation across the financial services industry. We are passionate about helping our clients succeed in an ever-changing industry.

We are also experts focused on development, automation, innovation, and long-term projects in financial services. At Capco, you can code, write, create, and live at your maximum capabilities without getting dull, tired, or foggy.

We're seeking a skilled Mid Big Data Engineer to join our team. 
The ideal candidate will be responsible for designing, implementing, and maintaining scalable data pipelines and solutions on on-prem, migration, and cloud projects for large-scale data processing and analytics.

THINGS YOU WILL DO

* Design, develop, and maintain robust data pipelines using Scala or Python, Spark, Hadoop, and SQL for batch and streaming data processing
* Collaborate with cross-functional teams to understand data requirements and design efficient solutions that meet business needs
* Optimize Spark jobs and data processing workflows for performance, scalability, and reliability
* Ensure data quality, integrity, and security throughout the data lifecycle
* Troubleshoot and resolve data pipeline issues in a timely manner to minimize downtime and impact on business operations
* Stay updated on industry best practices, emerging technologies, and trends in big data processing and analytics
* Document design specifications, deployment procedures, and operational guidelines for data pipelines and systems
* Provide technical guidance and mentorship for new joiners

TECH STACK: Python or Scala, OOP, Spark, SQL, Hadoop

Nice to have: GCP, Pub/Sub, BigQuery, Kafka, Juniper, Apache NiFi, Hive, Impala, Cloudera, CI/CD

SKILLS & EXPERIENCES YOU NEED TO GET THE JOB DONE

* min. 
3-4 years of experience as a Data Engineer/Big Data Engineer
* University degree in computer science, mathematics, natural sciences, or a similar field, plus relevant working experience
* Excellent SQL skills, including advanced concepts
* Very good programming skills in Python or Scala
* Experience with Spark and Hadoop
* Experience with OOP
* Experience using agile frameworks like Scrum
* Interest in financial services and markets
* Nice to have: experience or knowledge with GCP
* Fluent English communication and presentation skills
* Sense of humor and positive attitude

WHY JOIN CAPCO?

* Employment contract and/or business-to-business, whichever you prefer
* Possibility to work remotely
* Speaking English on a daily basis, mainly in contact with foreign stakeholders and peers
* Multiple employee benefits packages (MyBenefit Cafeteria, private medical care, life insurance)
* Access to a 3,000+ business courses platform (Udemy)
* Access to required IT equipment
* Paid Referral Program
* Participation in charity events, e.g. Szlachetna Paczka
* Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
* Being part of the core squad focused on the growth of the Polish business unit
* A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
* A work culture focused on innovation and creating lasting value for our clients and employees

ONLINE RECRUITMENT PROCESS STEPS*

* Screening call with the Recruiter
* Technical interview: first stage
* Client interview
* Feedback/Offer

*The recruitment process may be modified

#Salary and compensation
No salary data published by company, so we estimated a salary based on similar Design, Python, Recruiter, Cloud, Senior, and Engineer jobs:

$57,500 — $92,500/year
#Location
Kraków, Lesser Poland Voivodeship, Poland
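The Spark-style batch work the Capco posting describes (grouping and aggregating large datasets) can be sketched in plain Python; in PySpark the same logic would be a `groupBy(...).sum(...)` over a partitioned DataFrame. The dataset and field names below are invented for illustration.

```python
# Toy "revenue per product" aggregation; in Spark this would be
# df.groupBy("product").sum("revenue") executed across partitions.
from collections import defaultdict

rows = [
    {"product": "a", "revenue": 10.0},
    {"product": "b", "revenue": 4.0},
    {"product": "a", "revenue": 2.5},
]

totals = defaultdict(float)
for row in rows:
    totals[row["product"]] += row["revenue"]

# Sort descending by total, like ORDER BY total DESC in Spark SQL.
ranked = sorted(totals.items(), key=lambda kv: -kv[1])
print(ranked)  # [('a', 12.5), ('b', 4.0)]
```

The interesting part of the real job is that Spark distributes exactly this shuffle-and-reduce across executors; the single-machine sketch only shows the logical plan.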
The Mortgage Engineering team is seeking a highly skilled and experienced Senior Backend Engineer with a strong focus on microservices architecture to join our team. The ideal candidate will be proficient in Java and possess in-depth knowledge of Kafka, SQS, Redis, Postgres, Grafana, and Kubernetes. You are an expert in working with and scaling event-driven systems, webhooks, and RESTful APIs, and in solving challenges with concurrency and distributed systems. As a Senior Backend Engineer at Ocrolus, you will be responsible for designing, developing, and maintaining highly scalable and reliable backend systems. You will work closely with product managers, designers, and other engineers to ensure our services meet the highest standards of performance and reliability, specifically tailored to the needs of the mortgage industry.

Key Responsibilities:

* Design, develop, and maintain backend services and microservices architecture using Java.
* Implement event-driven systems utilizing Kafka and AWS SQS for real-time data processing and messaging.
* Optimize and manage in-memory data stores with Redis for high-speed caching and data retrieval.
* Develop and maintain robust database solutions with Postgres, ensuring data integrity and performance with pganalyze.
* Deploy, monitor, and manage containerized applications using Kubernetes and Terraform, ensuring their scalability and resilience, and manage our cloud infrastructure.
* Collaborate closely with product managers and designers to understand requirements and deliver technical solutions that meet business needs.
* Develop and maintain RESTful APIs and gRPC services to support seamless integration with frontend applications and third-party services.
* Ensure secure and efficient authentication and authorization processes using OAuth.
* Manage codebases in a monorepo environment using Bazel for build automation.
* Troubleshoot and resolve client support issues in a timely manner, ensuring minimal disruption to service.
* Continuously explore and implement new technologies and frameworks to improve system performance and efficiency.
* Write and maintain technical documentation on Confluence to document technical plans and processes and facilitate knowledge sharing across the team.
* Mentor junior engineers and contribute to the overall growth and development of the engineering team.

Required Qualifications:

* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 5+ years of professional experience in backend development with a focus on microservices.
* Proficiency in Java, with a strong preference for expertise in the Spring framework.
* Strong experience with Apache Kafka for building event-driven architectures.
* Hands-on experience with AWS SQS for message queuing and processing.
* Expertise in Redis for caching and in-memory data management.
* Solid understanding of Postgres or other relational databases, including performance tuning, migrations, and optimization.
* Proven experience with Kubernetes for container orchestration and management.
* Proficiency in developing and consuming RESTful APIs and gRPC services.
* Proficiency with the command line, Git for version control, and GitHub for code reviews.
* Familiarity with OAuth for secure authentication and authorization.
* Strong understanding of software development best practices, including version control, testing, and CI/CD automation.
* Excellent problem-solving skills and the ability to work independently and as part of a team.
* Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Preferred Qualifications:

* Experience working in the mortgage and fintech industries, with a deep understanding of domain-specific challenges and B2B SaaS requirements.
* Experience managing codebases in a monorepo environment with Bazel for build automation.
* Understanding of security best practices and their implementation in microservices.
* Experience with performance monitoring and logging tools such as Grafana, Sentry, pganalyze, Prometheus, and New Relic.
* Familiarity with cloud platforms such as AWS.
* Familiarity with Python.

#Salary and compensation
No salary data published by company, so we estimated a salary based on similar Redis, Java, Cloud, Git, Senior, Junior, Engineer, and Backend jobs:

$65,000 — $115,000/year
#Location
Gurgaon, Haryana, India
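The event-driven requirements in the Ocrolus posting (Kafka and SQS consumers) hinge on one pattern the listing only implies: both brokers deliver at-least-once, so handlers must be idempotent under redelivery. Below is a framework-free Python sketch of that pattern; the message shape and names are invented, and a production service would persist the seen-ID set (e.g., in Redis or Postgres) rather than hold it in memory.

```python
# Idempotent consumer: Kafka and SQS both deliver at-least-once,
# so a redelivered message must not be applied twice.
processed_ids = set()
ledger = []


def handle(message: dict) -> bool:
    """Apply the message exactly once; return False for duplicates."""
    if message["id"] in processed_ids:
        return False  # duplicate delivery, safely ignored
    ledger.append(message["amount"])
    processed_ids.add(message["id"])
    return True


# Simulate the broker redelivering message "m1".
deliveries = [
    {"id": "m1", "amount": 100},
    {"id": "m2", "amount": 50},
    {"id": "m1", "amount": 100},  # redelivered
]
results = [handle(m) for m in deliveries]
print(ledger)  # [100, 50]
```

The same dedupe-by-message-ID idea is what makes webhook endpoints and SQS workers safe to retry aggressively.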
Join SADA India as a Senior Data Engineer!

Your Mission

As a Sr. Data Engineer on the Enterprise Support service team at SADA, you will reduce customer anxiety about running production workloads in the cloud by implementing and iteratively improving observability and reliability. You will have the opportunity to engage with our customers in a meaningful way by defining, measuring, and improving key business metrics; eliminating toil through automation; inspecting code, design, implementation, and operational procedures; enabling experimentation by helping create a culture of ownership; and winning customer trust through education, skill sharing, and implementing recommendations. Your efforts will accelerate our customers' cloud adoption journey, and we will be with them through the transformation of their applications, infrastructure, and internal processes. You will be part of a new social contract between customers and service providers that demands shared responsibility and accountability: our partnership with our customers will ensure we are working towards a common goal and share a common fate.

This is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute their recommendations to our customers, and with Professional Services on large projects that require PMO support.

Pathway to Success

#MakeThemRave is at the foundation of all our engineering. Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems in the Google Cloud Platform.

Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. 
You will be measured twice a year on the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions.

As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations

Customer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly or bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.

Onboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals.

Job Requirements

Required Credentials:

* Google Professional Data Engineer certification, or the ability to complete it within the first 45 days of employment
* A secondary Google Cloud certification in any other specialization

Required Qualifications:

* 5+ years of experience in cloud support
* Experience supporting customers, preferably in 24/7 environments
* Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
* Experience writing software in two or more languages such as Python, Java, Scala, or Go
* Experience building production-grade data solutions (relational and NoSQL)
* Experience with systems monitoring/alerting, capacity planning, and performance tuning
* Experience with BI tools like Tableau, Looker, etc. is an advantage
* 
Consultative mindset that delights the customer by building good rapport with them to fully understand their requirements and provide accurate solutions

Useful Qualifications:

* Mastery in at least one of the following domain areas:
  * Google Cloud Dataflow: building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow; working with messaging systems like Pub/Sub, Kafka, and RabbitMQ; autoscaling Dataflow clusters and troubleshooting cluster operation issues
  * Data integration tools: building data pipelines using modern data integration tools such as Fivetran, Striim, Data Fusion, etc. Must have hands-on experience configuring and integrating with multiple data sources within and outside of Google Cloud
  * Large enterprise migration: migrating entire cloud or on-prem estates to Google Cloud, including data lakes, data warehouses, databases, business intelligence, jobs, etc., and providing consultation on cost optimization, methodology, and migration planning
* Experience with IoT architectures and building real-time data streaming pipelines
* Experience operationalizing machine learning models on large datasets
* Demonstrated leadership and self-direction, with a willingness to teach others and learn new techniques
* Demonstrated skill in selecting the right statistical tools for a given data analysis problem
* Understanding of Chaos Engineering
* Understanding of PCI, SOC 2, and HIPAA compliance standards
* Understanding of the principle of least privilege and security best practices
* Understanding of cryptocurrency and blockchain technology

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Cloud and Senior Engineer jobs:
$60,000 — $110,000/year
#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Thiruvananthapuram, Kerala, India
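The Useful Qualifications for this role call out building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow. As a rough, framework-free sketch of the shape such a pipeline takes, here are pure-Python stand-ins for the read/transform/write stages; all names and data are illustrative, not from the posting:

```python
# Toy batch ETL sketch: the extract -> transform -> load shape that
# frameworks like Apache Beam / Dataflow generalize to distributed and
# streaming settings. Names and data are illustrative only.

def extract(lines):
    """Parse raw CSV-like lines into (user, amount) records."""
    for line in lines:
        user, amount = line.split(",")
        yield user.strip(), float(amount)

def transform(records, min_amount=0.0):
    """Drop invalid (negative) amounts, a typical cleansing step."""
    for user, amount in records:
        if amount >= min_amount:
            yield user, amount

def load(records):
    """Aggregate per-key totals, analogous to GroupByKey + Combine."""
    totals = {}
    for user, amount in records:
        totals[user] = totals.get(user, 0.0) + amount
    return totals

raw = ["alice, 10.0", "bob, -5.0", "alice, 2.5"]
result = load(transform(extract(raw)))
```

In Beam itself these stages would become PTransforms chained with the `|` operator, and the per-key aggregation in `load` corresponds to a GroupByKey plus a combine step.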
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment that the company promises to reimburse later, and never pay for training you are required to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams; don't use them or pay for them.

Also, always verify that you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name of the site/email and see if it's the company's main domain name. Scams in remote work are rampant, so be careful! Read more to avoid scams.

When clicking the apply button above, you will leave Remote OK and go to the company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.
Who We Are:

Founded in 2018 by engineers from Stanford, Cisco Meraki, and Samsara, Spot AI is the fastest growing Video Intelligence company in the U.S. We are upending the $30 billion video surveillance market with an AI Camera System that helps people at work create safer, smarter organizations. In the process, we're disrupting video security to create a new category of Video Intelligence.

We are experiencing tremendous growth and are deployed at thousands of locations across businesses in 17 different verticals, ranging from local businesses to Fortune 500s. Our customers range from warehousing and healthcare to nonprofits and car washes, including SpaceX, ExtraSpace Storage, WineDirect, YMCA, and Veg Fresh Farms.

We've recently raised $40M in Series B financing to continue to transform how organizations use their video footage. We're backed by Scale Venture Partners, Redpoint Ventures, Bessemer Venture Partners, StepStone Group, and MVP Ventures.

Who You Are:

You're a self-driven, highly skilled AI Infrastructure Engineer with expertise in designing, architecting, and productizing computer vision or other AI systems in the cloud and on the edge. You excel at building and shipping infrastructure and tooling for computer vision or other AI products in production, and you bring a combination of passion and creativity along with a pragmatic approach. You have the skills to architect distributed AI systems that can handle millions of hours of customer footage.
You know how to work cross-functionally to meet the needs of multiple stakeholders, and you have a track record of success.

Note: candidates for this role must be geographically located in the United States.

What Excites You:

* Working with and mutually learning from an extremely high-caliber engineering team
* Developing real-time, high-availability video intelligence systems in the cloud and on the edge, with lots of ownership and opportunities to implement novel solutions
* Delivering video intelligence infrastructure and tools in ways that are thoughtful about end-user privacy, such as edge inference and federated training
* Designing, owning, and scaling end-to-end edge and cloud AI platforms
* Operating with autonomy in a fast-paced environment
* Communicating directly with customers to better understand their use cases and iterate on the future product roadmap

What Gets Our Attention:

* 3+ years of experience building and productizing AI infrastructure at scale, both in the cloud AND on the edge
* Track record of delivering outstanding results
* Strong engineering fundamentals with various back-end technologies
* Expertise leveraging leading cloud AI platforms, such as GCP, Azure, or AWS, for real-time inference
* Experience with big data processing technologies and databases, such as Apache Beam, Hadoop, Spark, Kafka, Pub/Sub, Dataflow, BigQuery, or Bigtable
* Experience with real-time, edge AI systems with limited memory and compute resources
* Experience building MLOps tools for automated testing, training, and deployment
* Outstanding communication skills
* Ability to prioritize and synthesize engineering tasks with first-principles thinking
* Ability to rapidly pick up new technologies and techniques and implement them
* Bachelor's degree in computer science or a related field; master's degree or higher preferred

What's In It For You:

Base Salary Range: $123,000 - $190,000.
Offered salary will be based on the candidate's demonstrated attributes and competencies related to the role, as well as the candidate's geographical location within the United States. Your recruiting partner can share more details about the compensation range as it relates to your geography once your interview process has begun.

* Generous early-stage equity
* Medical, dental, and vision plan options
* 401K with employer match
* Flexible and supportive time-off practices, including self-managed PTO and a generous new-parent leave policy
* Learning and development opportunities
* Remote work flexibility, including a stipend to set up your ideal home office

What We Value:

We operate under a trio of company values:

Customer First, Always. We are relentlessly curious about our customers' goals, and seek the simplest solutions to solve their problems.

Own Your Outcomes. We bias towards action, move fast, and iterate. Everyone on our team is empowered to make decisions.

It's a team effort. We help each other succeed. We leverage each other's strengths to accomplish big goals together.

And we are creating and cultivating a diverse and inclusive culture where we celebrate individuals for what they accomplish, no matter who they are! As an equal opportunity employer, we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Come join our journey!

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Video, Cloud, and Senior Engineer jobs:
$50,000 — $80,000/year
#Location
San Francisco, California, United States
This job post is closed and the position is probably filled. Please do not apply.
We are redefining how people approach their health

ZOE is combining scientific research at a scale never before imagined with cutting-edge AI to improve the health of millions.

Created by the world's top scientists, our personalised nutrition program is reimagining a fundamental human need: eating well for your own body. Currently available in the US and the UK, ZOE is already helping more than 100,000 ZOE members to adopt healthier habits and live better. Our work and expertise in biology, engineering, data science, and nutrition science have led to multiple breakthrough papers in leading scientific journals such as Nature Medicine, Science, The Lancet, and more.

To learn more, head to Spotify, Apple Podcasts, or Audible to listen to our Science & Nutrition Podcast (with 3 million listens!).

A remote-first, high-growth startup, we are backed by founders, investors, and entrepreneurs who have built multi-billion dollar technology companies. We are always looking for innovative thinkers and builders to join our team on a thrilling mission to tackle epic health problems. Together, we can improve human health and touch millions of lives.

We value inclusivity, transparency, ownership, open-mindedness, and diversity. We are passionate about delivering great results and learning in the open. We want our teams to have the freedom to make long-term, high-impact decisions, and the well-being of our teammates and the people around us is a top priority.

Check out what life is like for our tech team on ZOE Tech.

We're looking for a Senior Data Engineer to take ZOE even further.

About the team

The mission of the Core Science team is to transform research trials and data into personalised, actionable recommendations that reach our members. We are currently developing a feedback loop to measure the efficacy of ZOE's nutrition and health advice, which will drive the evolution of our recommendations.
In addition, the team is conducting supplementary studies in key areas, such as the microbiome, and constructing a platform to facilitate these trials alongside the main product. The team also maintains close collaboration with other stream-aligned teams to deliver scientific discoveries directly to the app.

We operate in a very dynamic and rewarding environment, where we work closely with all sorts of stakeholders to find the best solutions for both the business and our potential customers. Our agile, cross-functional teams use continuous delivery and regular feedback to ensure we deliver value to our customers on a daily basis. Our systems make use of Python, dbt, Apache Airflow, Kotlin, TypeScript, React, and FastAPI. We deploy and operate our software using Kubernetes, and our ML models using Vertex AI in GCP.

About the role

As a Senior Data Engineer in the Core Science team, you will be working with scientists, data scientists, and other engineers to build a platform that empowers our team to conduct scientific research trials and improve the efficacy of ZOE's nutrition and health advice. Every line of code you write will be a catalyst for groundbreaking discoveries.

In this role, you will also have the opportunity to make a significant impact on the personal and professional development of our team by providing guidance, support, and expertise. You will play a crucial role in helping individuals achieve their goals, overcome challenges, and maximise their potential.

You'll be
* Defining the data requirements for the research trials that the Core Science team will run, alongside data coming from the main product experience.
* Automating data collection from a variety of sources (e.g. labs, questionnaires, study coordination tools).
* Orchestrating integration of data derived from these trials into our data warehouse.
* Coordinating with different product teams to ensure a seamless app experience for both study participants and paying customers.
* Ensuring consistency and accuracy of all study data used for research and product development.
* Conducting exploratory data analysis to understand data patterns and trends.
* Creating algorithms and ML models when necessary.
* Ensuring data security and compliance with regulatory standards.
* Ensuring data accessibility for internal and external stakeholders, with up-to-date documentation on data sources and schemas.

We think you'll be great if you...
* Have 6+ years of experience in data engineering roles, with a proven track record of working on data integration, ETL processes, and data warehousing.
* Are proficient in Python and SQL and have experience with data warehouses (e.g. BigQuery, Snowflake) and interactive computing environments like Jupyter Notebooks.
* Have knowledge of data governance principles and best practices for ensuring data quality, security, and compliance with regulatory standards.
* Are detail-oriented and data-savvy, to ensure the accuracy and reliability of the data.
* Strive to keep your code clean, your tests complete and maintained, and your releases frequent.
* Have experience with cloud platforms like Google Cloud Platform (GCP) and with platforms to schedule and monitor data workflows, like Apache Airflow.
* Have a solid understanding of best practices around CI/CD, containers, and what a great release process looks like.
* Can collaborate effectively with cross-functional teams and communicate technical concepts to non-technical stakeholders.
* Have a mindset of collaboration and innovation, and a passion for contributing to groundbreaking scientific discoveries.

Nice to have
* Have experience with dbt and Apache Airflow.
* Have experience with ML modelling and MLOps.
* Have experience with privacy-preserving technologies such as federated learning and data synthesis.

These are the ideal skills, attributes, and experience we're looking for in this role. Don't worry if you don't tick all the boxes, especially on the skills and experience front; we're happy to upskill the right candidate.

Life as a ZOEntist: what you can expect from us
As well as industry-benchmarked compensation and all the hardware and software you need, we offer a thoughtfully curated list of benefits. We expect this list to evolve as we continue supporting our team members' long-term personal and professional growth, and their wellbeing.

Remote-first: work flexibly from home, our London office, or anywhere within the EU
Stock options: so you can share in our growth
Paid time off: 28 days paid leave (25 holiday days, plus 2 company-wide reset days and 1 "life event" day)
Enhanced parental leave: on top of the statutory offering
Flexible private healthcare and life assurance options
Pension contribution: pay monthly or top up, your choice
Health and wellbeing: like our Employee Assistance Program and Cycle to Work Scheme
Social, WFH, and Growth (L&D) budgets, plus multiple opportunities to connect, grow, and socialise

We're all about equal opportunities
We know that a successful team is made up of diverse people, able to be their authentic selves. To continue growing our team in the best way, we believe that equal opportunities matter, so we encourage candidates from any underrepresented background to apply for this role. You can view our Equal Opportunities statement in full here.

A closer look at ZOE
Think you've heard our name somewhere before? We were the team behind the COVID Symptom Study, which has since become the ZOE Health Study (ZHS). We use the power of community science to conduct large-scale research from the comfort of contributors' own homes.
Our collective work and expertise in biology, engineering, and data/nutrition science have led to multiple breakthrough papers in leading scientific journals such as Nature Medicine, Science, The Lancet, and more.

Seen ZOE in the media recently? Catch our co-founder Professor Tim Spector (one of the world's most cited scientists) and our Chief Scientist Dr Sarah Berry on this BBC Panorama, and listen to CEO Jonathan Wolf unpack the latest in science and nutrition on our ZOE podcast.

Oh, and if you're wondering why ZOE? It translates to "Life" in Greek, which we're helping ZOE members enjoy to the fullest.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, Cloud, and Senior Engineer jobs:
$60,000 — $105,000/year
#Location
UK/EU or compatible timezone (Remote)
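One of the responsibilities listed for this role is ensuring the consistency and accuracy of study data. As a hedged illustration of the kind of not-null/uniqueness check that tools like dbt codify as schema tests, here is a minimal stdlib-only sketch; sqlite3 stands in for BigQuery, and the `participants` table and its columns are hypothetical names invented for this example:

```python
# Sketch of a data-quality assertion (not-null + uniqueness) of the kind
# dbt expresses as schema tests. Table/column names are hypothetical, and
# the f-string SQL is for illustration only (not safe for untrusted input).
import sqlite3

def check_unique_not_null(conn, table, column):
    """Return True if `column` has no NULLs and no duplicate values."""
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    dupes = conn.execute(
        f"SELECT COUNT(*) FROM (SELECT {column} FROM {table} "
        f"GROUP BY {column} HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    return nulls == 0 and dupes == 0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE participants (participant_id TEXT, cohort TEXT)")
conn.executemany(
    "INSERT INTO participants VALUES (?, ?)",
    [("p1", "a"), ("p2", "a"), ("p2", "b")],  # "p2" is duplicated on purpose
)
ok = check_unique_not_null(conn, "participants", "participant_id")
```

Here `ok` is falsy because "p2" appears twice; in a warehouse setting the same two aggregate queries would run against the production table on a schedule, failing the pipeline when an invariant breaks.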
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.