Why TrueML?

TrueML is a mission-driven financial software company that aims to create better customer experiences for distressed borrowers. Consumers today want personal, digital-first experiences that align with their lifestyles, especially when it comes to managing finances. TrueML's approach uses machine learning to engage each customer digitally and adjust strategies in real time in response to their interactions.

The TrueML team includes inspired data scientists, financial services industry experts, and customer experience fanatics who build technology that serves people in a way that recognizes their unique needs and preferences as human beings, working to ensure nobody gets locked out of the financial system.

About the Role:

As a Senior Data Engineer II, you will play a pivotal role in designing, building, and maintaining our cutting-edge data lakehouse platform. You will leverage open table formats like Apache Iceberg to create scalable, reliable data solutions that enable optimized query performance across a broad spectrum of analytical workloads and emerging data applications. In this role, you'll develop and operate robust data pipelines, integrating diverse source systems and implementing efficient data transformations for both batch and streaming data.

Work-Life Benefits
* Unlimited PTO
* Medical benefit contributions in accordance with local laws and the type of employment agreement

What you'll do:
* Building the Data Lakehouse: Design, build, and operate robust data lakehouse solutions using open table formats like Apache Iceberg, delivering a scalable, reliable data lakehouse with optimized query performance for a wide range of analytical workloads and emerging data applications.
* Pipelines and Transformation: Integrate with diverse source systems and construct scalable data pipelines.
Implement efficient data transformation logic for both batch and streaming data, accommodating various data formats and structures.
* Data Modeling: Analyze business requirements and profile source data to design, develop, and implement robust data models and curated data products that power reporting, analytics, and machine learning applications.
* Data Infrastructure: Develop and manage scalable AWS cloud infrastructure for the data platform, employing Infrastructure as Code (IaC) to reliably support diverse data workloads. Implement CI/CD pipelines for automated, consistent, and scalable infrastructure deployments across all environments, adhering to best practices and company standards.
* Monitoring and Maintenance: Monitor data workloads for performance and errors, and troubleshoot issues to maintain high levels of data quality, freshness, and adherence to defined SLAs.
* Collaboration: Work closely with Data Services and Data Science colleagues to drive the evolution of our data platform, delivering solutions that empower data users and satisfy stakeholder needs throughout the organization.

A successful candidate will have:
* Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's degree is a plus).
* 5+ years of hands-on engineering experience (software or data), including 3+ years in data-focused roles.
* Experience implementing data lake and data warehousing platforms.
* Strong Python and SQL skills applied to data engineering tasks.
* Proficiency with the AWS data ecosystem, including services like S3, Glue Catalog, IAM, and Secrets Manager.
* Experience with Terraform and Kubernetes.
* A track record of successfully building and operationalizing data pipelines.
* Experience working with diverse data stores, particularly relational databases.

You might also have:
* Experience with Airflow, dbt, and Snowflake.
* Certification in relevant technologies or methodologies.
* Experience with stream processing technology, e.g., Flink or Spark Streaming.
* Familiarity with Domain-Driven Design principles and event-driven architectures.

$62,000 - $77,000 a year

Compensation Disclosure: This information reflects the anticipated base salary range for this position based on current national data. Minimums and maximums may vary based on location. Individual pay is based on skills, experience, and other relevant factors.

This role is only approved to hire within the following LatAm countries: Mexico, Argentina, or the Dominican Republic.

We are a dynamic group of people who are subject matter experts with a passion for change. Our teams are crafting solutions to big problems every day. If you're looking for an opportunity to do impactful work, join TrueML and make a difference.

Our Dedication to Diversity & Inclusion

TrueML is an equal-opportunity employer. We promote, value, and thrive with a diverse & inclusive team. Different perspectives contribute to better solutions, and this makes us stronger every day. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Python, Cloud, Senior and Engineer jobs:
$60,000 — $135,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
Please reference that you found the job on Remote OK; this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay for equipment that the company promises to reimburse later, and never pay for required trainings. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter; a good check is whether the domain name of the site/email is the company's main domain name. Scams in remote work are rampant, so be careful! Read more to avoid scams. When clicking the apply button above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.
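The TrueML posting above centers on batch and streaming transformations with data-quality and freshness guarantees. As an illustrative sketch only (the record schema and function below are hypothetical, invented for this example rather than taken from the posting or TrueML's codebase), the core of such a transformation step might look like:

```python
# Hypothetical batch-transform step for a lakehouse pipeline: normalize
# raw records and route rows that fail basic quality checks to a
# rejected list instead of silently dropping them.
def transform_batch(raw_rows):
    clean, rejected = [], []
    for row in raw_rows:
        amount = row.get("amount")
        # Quality gate: require an account id and a non-negative numeric amount.
        if row.get("account_id") is None or not isinstance(amount, (int, float)) or amount < 0:
            rejected.append(row)
            continue
        clean.append({
            "account_id": row["account_id"],
            # Store money as integer cents to avoid float drift downstream.
            "amount_cents": int(round(amount * 100)),
        })
    return clean, rejected

clean, rejected = transform_batch([
    {"account_id": "a1", "amount": 12.5},
    {"account_id": None, "amount": 3.0},
    {"account_id": "a2", "amount": -1},
])
# clean -> [{"account_id": "a1", "amount_cents": 1250}]; two rows rejected
```

In a real deployment of this pattern, the clean rows would be appended to an Iceberg table and the rejected rows written to a quarantine location for inspection against the SLAs the role describes.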
About AllTrails

AllTrails is the most trusted and used outdoors platform in the world. We help people explore the outdoors with hand-curated trail maps along with photos, reviews, and user recordings crowdsourced from our community of millions of registered hikers, mountain bikers, and trail runners in 150 countries. AllTrails is frequently ranked as a top-5 Health and Fitness app and has been downloaded by over 75 million people worldwide.

Every day, we solve incredibly hard problems so that we can get more people outside having healthy, authentic experiences and a deeper appreciation of the outdoors. Join us!

This is a U.S.-based remote position. San Francisco Bay Area employees are highly encouraged to come into the office one day a week.

What You'll Be Doing:
* Work cross-functionally to ensure data scientists have access to clean, reliable, and secure data, the backbone for new algorithmic product features
* Build, deploy, and orchestrate large-scale batch and stream data pipelines to transform and move data to/from our data warehouse and other systems
* Deliver scalable, testable, maintainable, and high-quality code
* Investigate, test for, monitor, and alert on inconsistencies in our data, data systems, or processing costs
* Create tools to improve data and model discoverability and documentation
* Ensure data collection and storage adhere to GDPR and other privacy and legal compliance requirements
* Uphold best data-quality standards and practices, promoting such knowledge throughout the organization
* Deploy and build systems that enable machine learning and artificial intelligence product solutions
* Mentor others on best industry practices

Requirements:
* Minimum of 6 years of experience working in data engineering
* Expertise in using both SQL and Python for data cleansing, transformation, modeling, pipelining, etc.
* Proficiency in working with other stakeholders, converting requirements into detailed technical specifications, and owning and leading projects from inception to completion
* Proficiency in working with high-volume datasets in SQL-based warehouses such as BigQuery
* Proficiency with parallelized Python-based data processing frameworks such as Google Dataflow (Apache Beam), Apache Spark, etc.
* Experience using ELT tools like Dataform or dbt
* Professional experience maintaining data systems in GCP and AWS
* Deep understanding of data modeling, access, storage, caching, replication, and optimization techniques
* Experience orchestrating data pipelines and Kubernetes-based jobs with Apache Airflow
* Understanding of the software development lifecycle and CI/CD
* Experience with monitoring and metrics-gathering tools (e.g., Datadog, New Relic, CloudWatch)
* Willingness to participate in a weekly on-call support rotation (currently the rotation is monthly)
* Proficiency with git and working collaboratively in a shared codebase
* Excellent documentation skills
* Self-motivation and a deep sense of pride in your work
* Passion for the outdoors
* Comfort with ambiguity and an instinct for moving quickly
* Humility, empathy, and open-mindedness; no egos

Bonus Points:
* Experience working in a multi-cloud environment
* Experience with GIS, H3, or other mapping technologies
* Experience with Amplitude
* Experience with infrastructure-as-code tools such as Terraform
* Experience with machine learning frameworks and platforms such as Vertex AI, SageMaker, MLflow, or related frameworks

What We Offer:
* A competitive and equitable compensation plan.
This is a full-time, salaried position that includes equity
* Physical & mental well-being benefits, including health, dental, and vision coverage
* Trail Days: no meetings on the first Friday of each month so you can test the app and explore new trails!
* Unlimited PTO
* Flexible parental leave
* Remote employee equipment stipend to create a great remote work environment
* Annual continuing education stipend
* Discounts on subscriptions and merchandise for you and your friends & family
* An authentic investment in you as a human being and your career as a professional

$170,000 - $210,000 a year

The successful candidate's starting salary will be determined based on various factors such as skills, experience, training, and credentials, as well as other business purposes or needs. It is not typical for a candidate to be hired at or near the top of their role's range; compensation decisions depend on the factors and circumstances of each case.

Nature celebrates you just the way you are, and so do we! At AllTrails we're passionate about nurturing an inclusive workplace that values diversity. It's no secret that companies that are diverse in background, age, gender identity, race, sexual orientation, physical or mental ability, ethnicity, and perspective are proven to be more successful. We're focused on creating an environment where everyone can do their best work and thrive.

AllTrails participates in the E-Verify program for all remote locations.
By submitting my application, I acknowledge and agree to AllTrails' Job Applicant Privacy Notice.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, Senior and Engineer jobs:
$70,000 — $120,000/year
#Benefits
#Location
San Francisco
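Much of the AllTrails role above is cleansing and transforming crowdsourced data before it reaches the warehouse. As a hedged, stand-alone sketch (the field names below are invented for illustration; the actual recording schema is not given in the posting), deduplicating user trail recordings by keeping only the latest revision of each might look like:

```python
# Hypothetical cleansing step: collapse duplicate crowdsourced trail
# recordings, keeping the highest revision seen for each recording id.
def dedupe_latest(records):
    latest = {}
    for rec in records:
        rid = rec["recording_id"]
        if rid not in latest or rec["revision"] > latest[rid]["revision"]:
            latest[rid] = rec
    # Return deterministically ordered output for stable downstream loads.
    return sorted(latest.values(), key=lambda r: r["recording_id"])

rows = dedupe_latest([
    {"recording_id": "r1", "revision": 1, "miles": 4.0},
    {"recording_id": "r2", "revision": 1, "miles": 2.2},
    {"recording_id": "r1", "revision": 3, "miles": 4.1},
])
# rows -> r1 at revision 3, then r2
```

In a warehouse like BigQuery the same dedup is usually expressed in SQL with a window function; the in-process version above just shows the shape of the logic.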
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.

Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/

Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.

Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 microservices in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.

About the Role:
We are seeking a talented and motivated data engineer to join our team. You will be responsible for designing, developing, and maintaining our data infrastructure, and for building backend systems and solutions that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. This role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance.
The candidate will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organisation.

Responsibilities:
* Software Engineering Excellence: Write clean, efficient, and maintainable code using JavaScript or Python while adhering to best practices and design patterns
* Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures
* Data Pipeline Development: Design, develop, and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources
* Data Storage & Warehousing: Build and optimize databases, data lakes, and data warehouses (e.g. Snowflake) for high-performance querying
* Data Integration: Work with APIs and batch and streaming data sources to ingest and transform data
* Performance Optimization: Optimize queries, indexing, and partitioning for efficient data retrieval
* Collaboration: Work with data analysts, data scientists, software developers, and product teams to understand requirements and deliver scalable solutions
* Monitoring & Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably
* Ownership & Problem-Solving: Proactively identify issues or bottlenecks and propose innovative solutions to address them

Requirements:
* 3+ years of experience in software development
* Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
* Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows
* Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns
* Software Engineering Experience: Demonstrated experience (SDE II/III level) designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala, or Java)
* ETL Tools & Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink, or similar technologies
* Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift)
* Databases & Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, ES)
* Version Control & CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment
* Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment
* Experience with data visualization tools (e.g. Superset, Tableau), Terraform, IaC, ML/AI data pipelines, and DevOps practices is a plus

EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary, and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

#LI-Remote #LI-NJ1

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Python, DevOps, JavaScript, Cloud, API, Marketing, Sales, Engineer and Backend jobs:
$60,000 — $90,000/year
#Benefits
#Location
Delhi
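The HighLevel role above is built around event-driven pipelines on message queues with at-least-once delivery, where the consumer must tolerate redelivered messages. As a minimal sketch under stated assumptions (an in-memory `queue.Queue` stands in for Pub/Sub or Kafka, and the event shape is invented), idempotent consumption by event id looks like:

```python
import queue

# Hypothetical consumer loop: drain a queue, deduplicating by event id so
# that redelivered messages (at-least-once semantics, as with Pub/Sub or
# Kafka) are processed exactly once per id.
def consume(q, handler):
    seen, results = set(), []
    while True:
        try:
            event = q.get_nowait()
        except queue.Empty:
            break
        if event["id"] in seen:
            continue  # duplicate delivery: skip
        seen.add(event["id"])
        results.append(handler(event))
    return results

q = queue.Queue()
for e in [{"id": 1, "type": "contact.created"},
          {"id": 1, "type": "contact.created"},   # simulated redelivery
          {"id": 2, "type": "contact.updated"}]:
    q.put(e)
processed = consume(q, lambda e: e["type"])
# processed -> ["contact.created", "contact.updated"]
```

With a real broker the `seen` set would live in durable storage (or the handler itself would be idempotent), since an in-process set does not survive restarts.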
About Us

Hawk is the leading provider of AI-supported anti-money-laundering and fraud detection technology. Banks and payment providers globally are using Hawk's powerful combination of traditional rules and explainable AI to improve the effectiveness of their AML compliance and fraud prevention, identifying more crime while maximizing efficiency by reducing false positives. With our solution, we are playing a vital role in the global fight against money laundering, fraud, and the financing of terrorism. We offer a culture of mutual trust, support, and passion, while providing individuals with opportunities to grow professionally and make a difference in the world.

Your Mission

As a DevOps Engineer, you will play a crucial role in ensuring the scalability, security, and reliability of our AI-driven financial crime prevention platform. You will automate cloud infrastructure, implement monitoring and observability solutions, and build secure, scalable CI/CD pipelines. Your work will directly contribute to maintaining high availability for a platform that fights financial crime 24/7. This role is based on the East Coast, U.S., and requires expertise in cloud infrastructure, automation, security best practices, and continuous integration/deployment (CI/CD).

Your Responsibilities
* Provision, manage, and scale multi-cloud environments using Infrastructure as Code (IaC) (e.g., Terraform).
* Maintain high availability (HA), fault tolerance, and least-privilege security practices, while optimizing cloud costs.
* Design and maintain developer-friendly CI/CD workflows, container templates, and reusable artifacts for seamless software delivery.
* Implement real-time monitoring, alerting, and observability solutions (e.g., Elastic Stack, Prometheus, Grafana, CloudWatch) to proactively detect and resolve issues.
* Implement and enforce cloud security best practices, identify and mitigate vulnerabilities, and ensure compliance with data protection regulations.
* Provide technical guidance to clients running Hawk's platform in their own VPC environments, supporting onboarding and integration.
* Develop structured documentation for cloud architectures, best practices, and deployment processes, ensuring seamless team collaboration.

Your Profile
* 5+ years of experience in DevOps, Site Reliability Engineering (SRE), or Cloud Engineering roles.
* Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
* Strong expertise in Kubernetes, containerized applications, and cloud-native technologies.
* Hands-on experience with AWS or GCP and their core services.
* Proficiency with Terraform and Infrastructure as Code (IaC) methodologies.
* Experience with CI/CD tools such as GitLab CI, GitHub Actions, or similar.
* Strong knowledge of observability and monitoring tools (e.g., Elastic Stack, Prometheus, Grafana, CloudWatch).
* Solid understanding of cloud security principles, least-privilege access, and automated security policies.
* Ability to diagnose complex technical challenges and provide scalable, secure solutions.
* Strong communication and collaboration skills; able to work effectively in a remote, cross-functional environment.
* Comfortable in a fast-paced, hands-on role, with a willingness to get your hands dirty and embrace feedback for continuous improvement.

Preferred Qualifications
* Experience in cybersecurity, penetration testing, and cloud compliance.
* Familiarity with Java Spring Boot & Apache Kafka.
* Experience in 24/7 uptime environments with on-call rotations.
* Knowledge of big data systems (PostgreSQL, S3/Azure Blob Storage, Elasticsearch).

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar DevOps, Java, Cloud and Engineer jobs:
$55,000 — $90,000/year
#Benefits
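The Hawk posting above emphasizes real-time monitoring and alerting. Monitoring stacks like Prometheus express such rules declaratively, but as a toy Python sketch (the sample format and the 0.5 threshold are arbitrary illustrations, not Hawk's configuration), the shape of a windowed error-rate alert is simply:

```python
# Hypothetical alert rule: fire when the error rate over the most recent
# `window` samples exceeds `threshold` -- the kind of check that
# observability tools like Prometheus/Grafana encode declaratively.
def error_rate_alert(samples, window, threshold):
    recent = samples[-window:]
    if not recent:
        return False  # no data: do not fire (a real system might alert on absence)
    rate = sum(1 for s in recent if s == "error") / len(recent)
    return rate > threshold

firing = error_rate_alert(["ok", "ok", "error", "error", "error"],
                          window=4, threshold=0.5)
# last 4 samples contain 3 errors -> rate 0.75 > 0.5, so the alert fires
```

The "no data means no alert" choice in the sketch is itself a design decision; production systems often treat missing telemetry as its own alert condition.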
About Zeotap

Founded in Berlin in 2014, Zeotap started with a mission to provide high-quality data to marketers. As we evolved, we recognized a greater challenge: helping brands create personalized, multi-channel experiences in a world that demands strict data privacy and compliance. This drive led to the launch of Zeotap's Customer Data Platform (CDP) in 2020: a powerful, AI-native SaaS suite built on Google Cloud that empowers brands to unlock and activate customer data securely.

Today, Zeotap is trusted by some of the world's most innovative brands, including Virgin Media O2, Amazon, and Audi, to create engaging, data-driven customer experiences that drive better business outcomes across marketing, sales, and service. With a unique background in high-quality data solutions, Zeotap is a leader in the European CDP market, empowering enterprises with a secure, privacy-first solution to harness the full potential of their customer data.

Responsibilities:
* You will design and implement robust, scalable, and high-performance data pipelines using Spark, Scala, and Airflow, with familiarity with Google Cloud.
* You will develop, construct, test, and maintain architectures such as databases and large-scale processing systems.
* You will assemble large, complex data sets that meet functional and non-functional business requirements.
* You will build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from various data sources.
* You will collaborate with data scientists and other stakeholders to improve data models that drive business processes.
* You will implement data flow and tracking using Apache Airflow.
* You will ensure data quality and integrity across various data processing stages.
* You will monitor and optimize the performance of the data processing pipeline.
* You will tune and optimize Spark jobs for performance and efficiency, ensuring they run effectively on large-scale data sets.
* You will troubleshoot and resolve issues related to data pipelines and infrastructure.
* You will stay up to date with the latest technologies and best practices in Big Data and data engineering.
* You will adhere to Zeotap's company, privacy, and information security policies and procedures.
* You will complete all assigned awareness trainings on time.

Requirements:
* 2+ years of experience building and deploying high-scale solutions
* Very good problem-solving skills and clear fundamentals of data structures and algorithms
* Expert coding skills in Java or Scala
* Expert coding skills in Go or Python are a huge plus
* Experience with Apache Spark or another Big Data stack is mandatory
* High-level and low-level design skills
* Deep understanding of any OLTP, OLAP, NoSQL, or graph database is a huge plus
* Deep knowledge of distributed systems and design is a huge plus
* Hands-on experience with streaming technologies like Kafka, Flink, Samza, etc. is a huge plus
* Good knowledge of scalable Big Data technologies
* Bachelor's or Master's degree in information systems, computer science, or a related field is preferred

What we offer:
* Competitive compensation and attractive perks
* Health insurance coverage
* Flexible working support, guidance, and training provided by a highly experienced team
* Fast-paced work environment
* Work with very driven entrepreneurs and a network of global senior investors across telco, data, advertising, and technology

Zeotap welcomes all: we are an equal employment opportunity & affirmative action employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.

Interested in joining us?

We look forward to hearing from you!

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, SaaS, Python, Cloud, Senior and Engineer jobs:
$52,500 — $92,500/year
#Benefits
#Location
Bengaluru, Karnataka
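The Zeotap role above is largely about Spark-style aggregations over event data. As a pure-Python stand-in (the event fields are invented for illustration; at Zeotap's scale this logic would run as a distributed Spark job rather than in-process), a per-user event count has the same map/reduce shape:

```python
from collections import defaultdict

# Toy version of a Spark groupBy/count: total events per user id.
# The distributed version partitions events by key across executors,
# but the reduce step applied per key is identical.
def count_events_by_user(events):
    counts = defaultdict(int)
    for e in events:
        counts[e["user_id"]] += 1
    return dict(counts)

counts = count_events_by_user([
    {"user_id": "u1", "event": "click"},
    {"user_id": "u2", "event": "view"},
    {"user_id": "u1", "event": "view"},
])
# counts -> {"u1": 2, "u2": 1}
```

Because per-key counting is associative and commutative, the same function works whether the events arrive as one local list or as partial counts merged from many partitions, which is exactly why it parallelizes well in Spark.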
Government Employees Insurance Company is hiring a
Remote Staff Engineer
Our Senior Staff Engineer works with our Staff and Sr. Engineers to innovate and build new systems, improve and enhance existing systems, and identify new opportunities to apply your knowledge to solve critical problems. You will lead the strategy and execution of a technical roadmap that will increase the velocity of delivering products and unlock new engineering capabilities. The ideal candidate is a self-starter who has deep technical expertise in their domain.\n\nPosition Responsibilities\n\nAs a Senior Staff Engineer, you will:\n* Provide technical leadership to multiple areas and provide technical and thought leadership to the enterprise\n* Collaborate across team members and across the tech organization to solve our toughest problems\n* Develop and execute technical software development strategy for a variety of domains\n* Be accountable for the quality, usability, and performance of the solutions\n* Utilize programming languages like C#, Java, Python or other object-oriented languages, SQL and NoSQL databases, container orchestration services including Docker and Kubernetes, and a variety of Azure tools and services\n* Be a role model and mentor, helping to coach and strengthen the technical expertise and know-how of our engineering and product community\n* Influence and educate executives\n* Consistently share best practices and improve processes within and across teams\n* Analyze costs and forecasts, incorporating them into business plans\n* Determine and support resource requirements, evaluate operational processes, measure outcomes to ensure desired results, demonstrate adaptability, and sponsor continuous learning\n\nQualifications\n* Exemplary ability to design, perform experiments, and influence engineering direction and product roadmap\n* Experience partnering with engineering teams and transferring research to production\n* Extensive experience in leading and building full-stack application and service development, with a strong focus on SaaS products/platforms. 
* Proven expertise in designing and developing microservices using C#, gRPC, Python, Django, Kafka, and Apache Spark, with a deep understanding of both API and event-driven architectures\n* Proven experience designing and delivering highly resilient event-driven and messaging-based solutions at scale with minimal latency\n* Deep hands-on experience building complex SaaS systems in large-scale, business-focused systems, with strong knowledge of Docker and Kubernetes\n* Fluency and specialization in at least two modern OOP languages such as C#, Java, C++, or Python, including object-oriented design\n* Strong understanding of open-source databases like MySQL, PostgreSQL, etc., and a strong foundation in NoSQL databases like Cosmos, Cassandra, and Apache Trino\n* In-depth knowledge of CS data structures and algorithms\n* Ability to excel in a fast-paced, startup-like environment\n* Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)\n* Experience with microservices-oriented architecture and extensible REST APIs\n* Experience building the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems\n* Experience implementing security protocols across services and products: understanding of Active Directory, Windows Authentication, SAML, OAuth\n* Fluency in DevOps concepts, cloud architecture, and the Azure DevOps Operational Framework\n* Experience leveraging PowerShell scripting\n* Experience with existing operational portals such as the Azure Portal\n* Experience with application monitoring tools and performance assessments\n* Experience with Azure networking (subscriptions, security zoning, etc.)\n\nExperience\n* 10+ years full-stack development experience (C#/Java/Python/Go), with expertise in client-side and server-side frameworks. 
* 8+ years of experience with architecture and design\n* 6+ years of experience in open-source frameworks\n* 4+ years of experience with AWS, GCP, Azure, or another cloud service\n\nEducation\n* Bachelor's degree in Computer Science, Information Systems, or equivalent education or work experience\n\nAnnual Salary\n\n$115,000.00 - $260,000.00\n\nThe above annual salary range is a general guideline. Multiple factors are taken into consideration to arrive at the final hourly rate/annual salary to be offered to the selected candidate. Factors include, but are not limited to, the scope and responsibilities of the role, the selected candidate's work experience, education and training, the work location, as well as market and business considerations. At this time, GEICO will not sponsor a new applicant for employment authorization for this position.\n\nBenefits: As an Associate, you'll enjoy our Total Rewards Program* to help secure your financial future and preserve your health and well-being, including:\n* Premier Medical, Dental and Vision Insurance with no waiting period**\n* Paid Vacation, Sick and Parental Leave\n* 401(k) Plan\n* Tuition Reimbursement\n* Paid Training and Licensures\n\n*Benefits may be different by location. Benefit eligibility requirements vary and may include length of service.\n\n**Coverage begins on the date of hire. Must enroll in New Hire Benefits within 30 days of the date of hire for coverage to take effect.\n\nThe equal employment opportunity policy of the GEICO Companies provides for a fair and equal employment opportunity for all associates and job applicants regardless of race, color, religious creed, national origin, ancestry, age, gender, pregnancy, sexual orientation, gender identity, marital status, familial status, disability or genetic information, in compliance with applicable federal, state and local law. GEICO hires and promotes individuals solely on the basis of their qualifications for the job to be filled. 
GEICO reasonably accommodates qualified individuals with disabilities to enable them to receive equal employment opportunity and/or perform the essential functions of the job, unless the accommodation would impose an undue hardship to the Company. This applies to all applicants and associates. GEICO also provides a work environment in which each associate is able to be productive and work to the best of their ability. We do not condone or tolerate an atmosphere of intimidation or harassment. We expect and require the cooperation of all associates in maintaining an atmosphere free from discrimination and harassment with mutual respect by and for all associates and applicants. For more than 75 years, GEICO has stood out from the rest of the insurance industry! We are one of the nation's largest and fastest-growing auto insurers thanks to our low rates, outstanding service and clever marketing. We're an industry leader employing thousands of dedicated and hard-working associates. As a wholly owned subsidiary of Berkshire Hathaway, we offer associates training and career advancement in a financially stable and rewarding workplace. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Design, SaaS, Python, Docker, DevOps, Education, Cloud, API, Senior and Engineer jobs that are similar:\n\n
$47,500 — $97,500/year\n
\n\n#Location\nMD Chevy Chase (Office) - JPS
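The event-driven and messaging-based architectures this role calls for decouple producers from consumers behind a topic. As a toy in-process sketch of that pattern, not part of the posting (class and topic names are illustrative; a real system would use Kafka or a similar broker):

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal publish/subscribe bus illustrating event-driven decoupling."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # Producers never reference consumers directly; they share only the topic name.
        for handler in self._subscribers[topic]:
            handler(event)
```

The design point: adding a new consumer means adding a subscriber, with no change to the producer, which is what makes event-driven services extensible at scale.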
\n\n\n\nWhat you'll be doing:\n* Design, build, and maintain ETL/ELT data pipelines that ingest data from multiple sources with different technologies (MySQL, BigQuery, Cloud Storage, CSV, FHIR, JSON, etc.);\n* Architect and maintain Sword's (complex) data warehouse; we own lots of different types of data;\n* Work with complex data architectures, like Kappa and Lambda architectures. You'll have the opportunity to work with batch and/or streaming processes;\n* Have the opportunity to work with some of the most exciting and newest technologies in the data field;\n* Work closely with our Algorithms and AI teams, supporting their needs for data and pipelines.\n\n\n\nWhat you need to have:\n* Strong experience with the Python programming language;\n* Previous experience in Software Engineering is considered a plus;\n* Very high level of SQL and data modelling expertise;\n* Knowledge of other database types, like NoSQL databases, is considered a plus;\n* Experience with cloud-based architectures, such as GCP and AWS;\n* Experience with orchestration frameworks like Apache Airflow;\n* Proactivity and capability of proposing improvements to the team, whether technical or process improvements.\n\n\n\nWhat we would love to see:\n* Knowledge of Data Governance;\n* Experience with the Modern Data Stack, such as DBT and BigQuery;\n* Experience with the Kafka ecosystem, e.g. 
Kafka, Kafka Connect, Schema Registry, and others;\n* Experience with IaC (Terraform), containerization (Docker, Kubernetes), and CI/CD;\n* Capability of zooming out and proposing changes to the data architecture.\n\n\n\nTo ensure you feel good solving a big Human problem, we offer:\n* A stimulating, fast-paced environment with lots of room for creativity;\n* A bright future at a promising high-tech startup company;\n* Career development and growth, with a competitive salary;\n* The opportunity to work with a talented team and to add real value to an innovative solution with the potential to change the future of healthcare;\n* A flexible environment where you can control your hours (remotely) with unlimited vacation; \n* Access to our health and well-being program (digital therapist sessions);\n* Remote or hybrid work policy (Portugal only);\n* To get to know more about our Tech Stack, check here.\n\n\n\n\n\n\n* Please note that this position does not offer relocation assistance. Candidates must possess a valid EU visa and be based in Portugal. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Python, Cloud, Senior and Engineer jobs that are similar:\n\n
$55,000 — $95,000/year\n
\n\n#Location\nPorto
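The ETL/ELT pipelines described above follow the same three-step shape regardless of source: extract raw records, transform them into a clean schema, and load them into the warehouse. A minimal stdlib sketch of that shape, under an assumed toy schema (the column and table names are illustrative, not Sword's actual data model):

```python
import csv
import io
import sqlite3

def run_etl(raw_csv: str, conn: sqlite3.Connection) -> float:
    """One minimal extract-transform-load pass; returns a sanity-check metric."""
    # Extract: parse the raw CSV export from a source system.
    rows = csv.DictReader(io.StringIO(raw_csv))
    # Transform: normalise identifiers and cast amounts to numbers.
    cleaned = [(r["patient"].strip().lower(), float(r["minutes"])) for r in rows]
    # Load: write into a warehouse-style table.
    conn.execute("CREATE TABLE IF NOT EXISTS sessions (patient TEXT, minutes REAL)")
    conn.executemany("INSERT INTO sessions VALUES (?, ?)", cleaned)
    return conn.execute("SELECT SUM(minutes) FROM sessions").fetchone()[0]
```

Production pipelines swap each step for heavier machinery (BigQuery loads, Airflow scheduling, schema validation), but the extract/transform/load decomposition stays the same.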
\nJoin SADA India as a Senior Data Engineer on the Enterprise Support service team!\n\nYour Mission \n\nAs a Sr. Data Engineer on the Enterprise Support service team at SADA, you will reduce customer anxiety about running production workloads in the cloud by implementing and iteratively improving observability and reliability. You will have the opportunity to engage with our customers in a meaningful way by defining, measuring, and improving key business metrics; eliminating toil through automation; inspecting code, design, implementation, and operational procedures; enabling experimentation by helping create a culture of ownership; and winning customer trust through education, skill sharing, and implementing recommendations. Your efforts will accelerate our customers' cloud adoption journey, and we will be with them through the transformation of their applications, infrastructure, and internal processes. You will be part of a new social contract between customers and service providers that demands shared responsibility and accountability: our partnership with our customers will ensure we are working towards a common goal and share a common fate.\n\nThis is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute their recommendations to our customers, and with Professional Services on large projects that require PMO support.\n\nPathway to Success \n\n#MakeThemRave is at the foundation of all our engineering. Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems on the Google Cloud Platform.\n\nYour success starts by positively impacting the direction of a fast-growing practice with vision and passion. 
You will be measured bi-yearly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions.\n\nAs you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.\n\nExpectations\n\nCustomer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.\n\nOnboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals.\n\nJob Requirements\n\nRequired Credentials:\n\n\n* Google Professional Data Engineer Certified or able to complete within the first 45 days of employment \n\n* A secondary Google Cloud certification in any other specialization\n\n\n\n\nRequired Qualifications: \n\n\n* 4+ years of experience in Cloud support\n\n* Experience in supporting customers preferably in 24/7 environments\n\n* Experience working with Google Cloud data products (CloudSQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc)\n\n* Experience writing software in at least two or more languages such as Python, Java, Scala, or Go\n\n* Experience in building production-grade data solutions (relational and NoSQL)\n\n* Experience with systems monitoring/alerting, capacity planning, and performance tuning\n\n* Experience with BI tools like Tableau, Looker, etc will be an advantage\n\n* 
Consultative mindset that delights the customer by building good rapport with them to fully understand their requirements and provide accurate solutions\n\n\n\n\nUseful Qualifications:\n\n\n* Mastery in at least one of the following domain areas:\n\n* Google Cloud Dataflow: building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ; auto-scaling Dataflow clusters; troubleshooting cluster operation issues\n\n* Data Integration Tools: building data pipelines using modern data integration tools such as Fivetran, Striim, Data Fusion, etc. Must have hands-on experience configuring and integrating with multiple data sources within and outside of Google Cloud\n\n* Large Enterprise Migration: migrating entire cloud or on-prem assets to Google Cloud, including Data Lakes, Data Warehouses, Databases, Business Intelligence, Jobs, etc. Provide consultations for optimizing cost, defining methodology, and coming up with a plan to execute the migration.\n\n\n* Experience with IoT architectures and building real-time data streaming pipelines\n\n* Experience operationalizing machine learning models on large datasets\n\n* Demonstrated leadership and self-direction, including a willingness to teach others and learn new techniques\n\n* Demonstrated skills in selecting the right statistical tools given a data analysis problem\n\n* Understanding of Chaos Engineering\n\n* Understanding of PCI, SOC 2, and HIPAA compliance standards\n\n* Understanding of the principle of least privilege and security best practices\n\n* Understanding of cryptocurrency and blockchain technology\n\n\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Cloud, Senior and Engineer jobs that are similar:\n\n
$60,000 — $100,000/year\n
\n\n#Location\nThiruvananthapuram, Kerala, India
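Batch/streaming pipelines of the Dataflow/Beam kind listed above typically assign each event to a fixed event-time window before aggregating. A stdlib sketch of fixed windowing, offered only as an illustration (Beam's own windowing API additionally handles watermarks and late data):

```python
from collections import defaultdict

def fixed_window_sums(events, window_secs):
    """Group (timestamp, value) events into fixed event-time windows of
    window_secs and sum each window. Illustrative, event-time only."""
    windows = defaultdict(float)
    for ts, value in events:
        # Every timestamp maps to the start of the window containing it.
        window_start = ts - (ts % window_secs)
        windows[window_start] += value
    return dict(sorted(windows.items()))
```

Because the window assignment depends only on the event's own timestamp, the same logic applies unchanged whether the events arrive as a bounded batch or as an unbounded stream.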
Remote Cloudstack Engineer Public Cloud Scalability Team
\nCloudstack Engineer - Public Cloud Scalability Team\n\nOur office is based in Amsterdam, but remote work within the EU is available. We also offer relocation to the Netherlands. \n\nProduct Engineering at Leaseweb \n\nOur team of approximately 90 engineers works in small scrum teams. Each team has end-to-end responsibility for a specific product or part of our architecture. We work on a remote-first basis, coming together in person at our Amsterdam headquarters twice a year.\n\nOur organizational structure is flat, placing a high value on independence and entrepreneurship. The atmosphere is informal and relaxed, creating a highly motivating work environment in which you will work with some of the most inspiring colleagues in the industry.\n\nWhat is the role about? \n\nFor this role we are looking for a highly experienced developer with a true DevOps mentality and skillset. You will be collaborating with and contributing to the Apache CloudStack project. Since we are a provider of hosting infrastructure, deep knowledge of Linux and networking is key. Supported by your team, you will be a self-organizing and independent professional who takes the lead in running and scaling our CloudStack deployments: from diving into software bugs and reproducing customer problems to proposing and building sustainable solutions, so that Leaseweb can provide scalable Public Cloud services.\n\nYou will be working with a team of DevOps engineers: a group with diverse expertise and highly curious minds who are excited about the challenges of building and operating Leaseweb's Public Cloud. 
Our objective is to build reliable platforms and interfaces providing trustworthy endpoints for users to integrate with: running a standardized stack, easily maintained, and fully autonomous for users through our API and Customer Portal.\n\nKey responsibilities:\n\n\n* Maintaining close collaboration with the Apache CloudStack community on the CloudStack project.\n\n* Developing and supporting the Apache CloudStack project.\n\n* Together with your team, maintaining Leaseweb's CloudStack deployments, both operationally and through software improvements.\n\n* Working with the team to resolve issues that customers face with CloudStack, by fixing bugs and introducing features.\n\n* Participating in the on-duty rotation schedule.\n\n\n\n\nRequirements:\n\n\n* Understanding of the Apache CloudStack open-source project.\n\n* Extensive experience with Java development in a cloud hosting context.\n\n* Experience with Python for automation and testing purposes.\n\n* Knowledge of Linux operating systems, preferably Ubuntu. 
Experience with Shell/Bash is an advantage.\n\n* Excellent knowledge of virtualization technologies (KVM, QEMU, and libvirt) is required.\n\n* Knowledge of and experience with networking and storage use and automation is a big advantage.\n\n* Love of teamwork, good planning skills, logical thinking, problem-solving skills, and an eye for detail.\n\n* Experience with continuous integration tools such as Jenkins is a plus.\n\n* Experience with configuration management systems like Chef is a plus.\n\n* Experience with Git, Grafana, Prometheus, Kubernetes, or Docker is a plus.\n\n\n\n\nBenefits include \n\n\n* Participation in the annual company bonus scheme and company pension\n\n* Internet allowance and travel allowance\n\n* Working from home policy \n\n* Lease bike plan\n\n* 25 days of paid time off (and the option to buy or sell up to 5 more days) \n\n* Free lunch, parking, and fresh fruit provided when in the office \n\n* Attractive relocation packages and an agency that takes care of the entire visa process\n\n* Access to the Leaseweb Academy, a personalized learning platform offering a variety of studies, (Dutch) courses, and trainings \n\n* Fun events year-round, from virtual pub quizzes to summer parties, company runs, quarterly hackathons, and much more \n\n* Monthly after-work drinks\n\n* A multicultural work environment (our colleagues are from over 60 countries!) in a company where you can truly make a difference \n\n\n\n\nReady for the next step? \n\nIf you'd like to apply, please do so online. To learn more about us, follow us on LinkedIn or Instagram to get an inside look at life at Leaseweb. For questions, please reach out to Danisha Ardilla, Talent Acquisition Specialist, at: [email protected]\n\nWe directly source all candidates; any unsolicited profiles received from recruitment agencies will be treated as direct applications. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Python, Docker, Travel, DevOps, Java, Cloud, API and Engineer jobs that are similar:\n\n
$55,000 — $100,000/year\n
\n\n#Location\nAmsterdam, North Holland, Netherlands
Remote Senior AI Infra Engineer AI ML and Data Infrastructure
The Team\n\nAcross our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central team provides the support needed to push this work forward. \n\nThe Central team at CZI consists of our Finance, People & DEI, Real Estate, Events, Workplace, Facilities, Security, Brand & Communications, Business Systems, Central Operations, Strategic Initiatives, and Ventures teams. These teams provide strategic support and operational excellence across the board at CZI.\nThe Opportunity\n\nBy pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways to help drive solutions. We are uniquely positioned to design, build, and scale software systems to help educators, scientists, and policy experts better address the myriad challenges they face. Our technology team is already helping schools bring personalized learning tools to teachers and schools across the country. We are also supporting scientists around the world as they develop a comprehensive reference atlas of all cells in the human body, and are developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to solve important problems in the biomedical sciences. \n\nThe AI/ML and Data Engineering Infrastructure organization works on building shared tools and platforms to be used across all of the Chan Zuckerberg Initiative, partnering and supporting the work of a wide range of Research Scientists, Data Scientists, AI Research Scientists, as well as a broad range of Engineers focusing on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale. 
A person in this role will build these technology solutions and help to cultivate a culture of shared best practices and knowledge around core engineering.\nWhat You'll Do\n\n\n* Participate in the technical design and building of efficient, stable, performant, scalable, and secure AI/ML and data infrastructure engineering solutions.\n\n* Do active hands-on coding on our Deep Learning and Machine Learning models.\n\n* Design and implement complex systems integrating with our large-scale AI/ML GPU compute infrastructure and platform, making working across multiple clouds easier and more convenient for our Research Engineers, ML Engineers, and Data Scientists. \n\n* Use your solid experience and skills in building containerized applications and infrastructure using Kubernetes in support of our large-scale GPU research cluster, as well as working on our various heterogeneous and distributed AI/ML environments. \n\n* Collaborate with other team members in the design and build of our cloud-based AI/ML platform solutions, which include Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes.\n\n* Collaborate with our partners on data management solutions in our heterogeneous collection of complex datasets.\n\n* Help build tooling that makes optimal use of our shared infrastructure in empowering our AI/ML efforts with a world-class GPU compute cluster and other compute environments such as our AWS-based services.\n\n\n\nWhat You'll Bring\n\n\n* BS or MS degree in Computer Science or a related technical discipline, or equivalent experience\n\n* 5+ years of relevant coding experience\n\n* 3+ years of systems architecture and design experience, with a broad range of experience across Data, AI/ML, Core Infrastructure, and Security Engineering\n\n* Experience scaling containerized applications on Kubernetes or Mesos, including expertise with creating custom containers using secure AMIs and 
continuous deployment systems that integrate with Kubernetes or Mesos (Kubernetes preferred)\n\n* Proficiency with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, and experience with on-prem and colocation service hosting environments\n\n* Proven coding ability with a systems language such as Rust, C/C++, C#, Go, Java, or Scala\n\n* Demonstrated ability with a scripting language such as Python, PHP, or Ruby\n\n* AI/ML platform operations experience in an environment with challenging data and systems platform problems, including large-scale Kafka and Spark deployments (or their counterparts such as Pulsar, Flink, and/or Ray), as well as workflow scheduling tools such as Apache Airflow, Dagster, or Apache Beam \n\n* MLOps experience working with medium- to large-scale GPU clusters in Kubernetes (Kubeflow), HPC environments, or large-scale cloud-based ML deployments\n\n* Working knowledge of Nvidia CUDA and AI/ML custom libraries \n\n* Knowledge of Linux systems optimization and administration\n\n* Understanding of Data Engineering, Data Governance, Data Infrastructure, and AI/ML execution platforms\n\n* PyTorch, Keras, or TensorFlow experience is a strong nice-to-have\n\n* HPC and Slurm experience is a strong nice-to-have\n\n\n\nCompensation\n\nThe Redwood City, CA base pay range for this role is $190,000 - $285,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in the range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside Redwood City are adjusted based on the cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.\nBenefits for the Whole You \n\nWe're thankful to have an incredible team behind our work. 
To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible.\n\n\n* CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.\n\n* An annual benefit for employees that can be used most meaningfully for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.\n\n* CZI Life of Service Gifts are awarded to employees to "live the mission" and support the causes closest to them.\n\n* Paid time off to volunteer at an organization of your choice.\n\n* Funding for select family-forming benefits.\n\n* Relocation support for employees who need assistance moving to the Bay Area.\n\n* And more!\n\n\n\nCommitment to Diversity\n\nWe believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn about our diversity, equity, and inclusion efforts.\n\nIf you're interested in a role but your previous experience doesn't perfectly align with each qualification in the job description, we still encourage you to apply, as you may be the perfect fit for this or another role.\n\nExplore our work modes, benefits, and interview process at www.chanzuckerberg.com/careers.\n\n#LI-Remote\n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Design, Amazon, Recruiter, Cloud, Senior and Engineer jobs that are similar:\n\n
$42,500 — $82,500/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nRedwood City, California, United States
๐ Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
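The CZI posting above centers on scheduling AI/ML workloads onto a shared GPU compute cluster. As a minimal sketch of the kind of placement logic such infrastructure performs, here is a first-fit GPU scheduler in pure Python; the node names, job names, and first-fit policy are illustrative assumptions, not CZI's actual scheduler (a real cluster would delegate this to Kubernetes or Slurm):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_gpus: int

def schedule(jobs, nodes):
    """Assign each (job_name, gpus_needed) job to the first node with
    enough free GPUs. Returns {job_name: node_name}; jobs that cannot
    be placed map to None."""
    placement = {}
    for job_name, gpus_needed in jobs:
        placement[job_name] = None
        for node in nodes:
            if node.free_gpus >= gpus_needed:
                node.free_gpus -= gpus_needed  # reserve capacity
                placement[job_name] = node.name
                break
    return placement

# Hypothetical cluster state and job queue.
nodes = [Node("gpu-a", 4), Node("gpu-b", 8)]
jobs = [("train-llm", 8), ("finetune", 2), ("eval", 4)]
placement = schedule(jobs, nodes)
# → {"train-llm": "gpu-b", "finetune": "gpu-a", "eval": None}
```

First-fit is the simplest placement policy; production schedulers add priorities, preemption, and bin-packing heuristics on top of the same basic capacity check.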
\nAbout Sayari: \nSayari is the counterparty and supply chain risk intelligence provider trusted by government agencies, multinational corporations, and financial institutions. Its intuitive network analysis platform surfaces hidden risk through integrated corporate ownership, supply chain, trade transaction, and risk intelligence data from over 250 jurisdictions. Sayari is headquartered in Washington, D.C., and its solutions are used by thousands of frontline analysts in over 35 countries.\n\n\nOur company culture is defined by a dedication to our mission of using open data to enhance visibility into global commercial and financial networks, a passion for finding novel approaches to complex problems, and an understanding that diverse perspectives create optimal outcomes. We embrace cross-team collaboration, encourage training and learning opportunities, and reward initiative and innovation. If you like working with supportive, high-performing, and curious teams, Sayari is the place for you.\n\n\nJob Description:\nSayari's flagship product, Sayari Graph, provides instant access to structured business information from billions of corporate, legal, and trade records. As a member of Sayari's data team you will work with the Product and Software Engineering teams to collect data from around the globe, maintain existing data pipelines, and develop new pipelines that power Sayari Graph. 
\n\n\n\nJob Responsibilities:\n* Write and deploy crawling scripts to collect source data from the web\n* Write and run data transformers in Scala Spark to standardize bulk data sets\n* Write and run modules in Python to parse entity references and relationships from source data\n* Diagnose and fix bugs reported by internal and external users\n* Analyze and report on internal datasets to answer questions and inform feature work\n* Work collaboratively on and across a team of engineers using agile principles\n* Give and receive feedback through code reviews \n\n\n\nSkills & Experience:\n* Professional experience with Python and a JVM language (e.g., Scala)\n* 2+ years of experience designing and maintaining data pipelines\n* Experience using Apache Spark and Apache Airflow\n* Experience with SQL and NoSQL databases (e.g., column stores, graph databases)\n* Experience working on a cloud platform like GCP, AWS, or Azure\n* Experience working collaboratively with Git\n* Understanding of Docker/Kubernetes\n* Interest in learning from and mentoring team members\n* Experience supporting and working with cross-functional teams in a dynamic environment\n* Passion for open-source development and innovative technology\n* Experience working with BI tools like BigQuery and Superset is a plus\n* Understanding of knowledge graphs is a plus\n\n\n\n\n$100,000 - $125,000 a year\nThe target base salary for this position is $100,000 - $125,000 USD plus bonus. 
Final offer amounts are determined by multiple factors including location, local market variances, candidate experience and expertise, and internal peer equity, and may vary from the amounts listed above.\n\nBenefits: \n* Limitless growth and learning opportunities\n* A collaborative and positive culture - your team will be as smart and driven as you\n* A strong commitment to diversity, equity & inclusion\n* Exceedingly generous vacation leave, parental leave, floating holidays, flexible schedule, & other remarkable benefits\n* Outstanding competitive compensation & commission package\n* Comprehensive family-friendly health benefits, including full healthcare coverage plans, commuter benefits, & 401(k) matching\n \nSayari is an equal opportunity employer and strongly encourages diverse candidates to apply. We believe diversity and inclusion mean our team members should reflect the diversity of the United States. No employee or applicant will face discrimination or harassment based on race, color, ethnicity, religion, age, gender, gender identity or expression, sexual orientation, disability status, veteran status, genetics, or political affiliation. We strongly encourage applicants of all backgrounds to apply. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Python, Cloud and Engineer jobs that are similar:\n\n
$50,000 — $100,000/year\n
\n\n#Benefits\n
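The Sayari posting above asks for Python modules that parse entity references and relationships from source data. A minimal sketch of that shape of task, using an invented record schema (the field names, entity kinds, and relationship types are assumptions for illustration, not Sayari's actual data model):

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str
    kind: str  # e.g. "company" or "person"

@dataclass(frozen=True)
class Relationship:
    source: str
    target: str
    kind: str  # e.g. "director_of"

def parse_record(raw: str):
    """Parse one JSON corporate record into entities and relationships."""
    record = json.loads(raw)
    company = Entity(name=record["company_name"], kind="company")
    entities = [company]
    relationships = []
    for officer in record.get("officers", []):
        person = Entity(name=officer["name"], kind="person")
        entities.append(person)
        relationships.append(Relationship(source=person.name,
                                          target=company.name,
                                          kind=officer.get("role", "officer_of")))
    return entities, relationships

# Hypothetical source record.
raw = '{"company_name": "Acme Ltd", "officers": [{"name": "J. Doe", "role": "director_of"}]}'
entities, relationships = parse_record(raw)
```

At scale, the same per-record function would run inside a Spark or Airflow-managed job over bulk data sets, with the extracted entities and edges feeding a knowledge graph.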
\nAbout the company\n\n\nZeotap is the next-generation Customer Data Platform. It empowers brands to unify, enhance, and activate customer data in a cookieless future, all while putting consumer privacy and compliance front-and-centre. Recognised by Gartner as a "Cool Vendor", Zeotap works with over 80 of the world's top 100 brands, including P&G, Nestlé, and Virgin Media. It is also the founding member of ID+, a universal marketing ID initiative.\n\n\nZeotap is expanding its SaaS product suite, branded as the Customer Intelligence Platform, an integrated suite for customer data collection, ID resolution, predictive analytics, audience management, and activation.\nOur ideal candidate will be passionate about helping our enterprise customers and business teams across the globe, will enjoy resolving problems and identifying root causes of issues, and will thrive in a team environment. You will be responsible for end-to-end ticket resolution and communication with customers and other stakeholders within the SLA, escalation to internal teams based on SOPs, creation of internal knowledge-base articles, and engineering support activities as and when required.\n\n\n\nResponsibilities:\n* You will design and implement robust, scalable, and high-performance data pipelines using Spark, Scala, and Airflow, with familiarity with Google Cloud.\n* You will develop, construct, test, and maintain architectures such as databases and large-scale processing systems.\n* You will assemble large, complex data sets that meet functional and non-functional business requirements.\n* You will build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from various data sources.\n* You will collaborate with data scientists and other stakeholders to improve data models that drive business processes.\n* You will implement data flow and tracking using Apache Airflow.\n* You will ensure data quality and integrity across various data processing 
stages.\n* You will monitor and optimize the performance of the data processing pipeline.\n* You will tune and optimize Spark jobs for performance and efficiency, ensuring they run effectively on large-scale data sets.\n* You will troubleshoot and resolve issues related to data pipelines and infrastructure.\n* You will stay up to date with the latest technologies and best practices in Big Data and data engineering.\n* You will adhere to Zeotap's company, privacy, and information security policies and procedures.\n* You will complete all assigned awareness trainings on time.\n\n\n\n\nRequirements:\n* 1+ years of experience in building and deploying high-scale solutions\n* Very good problem-solving skills and clear fundamentals of data structures and algorithms\n* Expert coding skills in Java or Scala\n* Expert coding skills in Go or Python are a huge plus\n* Apache Spark or other Big Data stack experience is mandatory\n* High-level and low-level design skills\n* Deep understanding of any OLTP, OLAP, NoSQL, or graph database is a huge plus\n* Deep knowledge of distributed systems and design is a huge plus\n* Hands-on experience with streaming technologies like Kafka, Flink, Samza, etc. is a huge plus\n* Good knowledge of scalable Big Data technologies\n* Bachelor's or Master's degree in information systems, computer science, or other related fields is preferred\n\n\n\n\nWhat we offer:\n* Competitive compensation and attractive perks\n* Health insurance coverage\n* Flexible working support, guidance, and training provided by a highly experienced team\n* Fast-paced work environment\n* Work with very driven entrepreneurs and a network of global senior investors across telco, data, advertising, and technology\n\n\n\n\n\nZeotap welcomes all; we are equal employment opportunity & affirmative action employers. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.\n \nInterested in joining us?\n \nWe look forward to hearing from you! \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Design, SaaS, Python, Java, NoSQL, Senior, Marketing and Engineer jobs that are similar:\n\n
$60,000 — $90,000/year\n
\n\n#Benefits\n
\n\n#Location\nBengaluru, Karnataka
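The Zeotap posting above involves tuning Spark transformations over large event data sets. The core map/reduce pattern such a batch job follows can be sketched in pure Python (the event records and field names are invented; a real pipeline would run the per-partition step on Spark executors and the merge on the driver or in a shuffle):

```python
from collections import Counter
from functools import reduce

# Toy event "partitions", standing in for the distributed partitions
# a Spark job would process in parallel.
partitions = [
    [{"user": "u1", "event": "click"}, {"user": "u2", "event": "view"}],
    [{"user": "u1", "event": "click"}, {"user": "u3", "event": "click"}],
]

def count_events(partition):
    """Map step: count events per type within one partition."""
    return Counter(row["event"] for row in partition)

def merge(a, b):
    """Reduce step: combine per-partition counts."""
    return a + b

totals = reduce(merge, map(count_events, partitions))
# → Counter({"click": 3, "view": 1})
```

Keeping the per-partition step associative and commutative, as here, is what lets the same logic scale out: partitions can be processed in any order on any executor and merged pairwise.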
Remote - Astronomer designed Astro, a modern data orchestration platform powered by Apache Airflow™. Astro enables companies to place Apache Airflow at the core of their data operations, providing ease of use, scalability, and enterprise-grade security, to ensure the reliable... \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Golang, Senior and Engineer jobs that are similar:\n\n
$65,000 — $120,000/year\n
\n\n#Benefits\n
\nThe Mortgage Engineering team is seeking a highly skilled and experienced Senior Backend Engineer with a strong focus on microservices architecture to join our team. The ideal candidate will be proficient in Java and possess in-depth knowledge of Kafka, SQS, Redis, Postgres, Grafana, and Kubernetes. You are an expert in working with and scaling event-driven systems, webhooks, and RESTful APIs, and in solving challenges with concurrency and distributed systems. As a Senior Backend Engineer at Ocrolus, you will be responsible for designing, developing, and maintaining highly scalable and reliable backend systems. You will work closely with product managers, designers, and other engineers to ensure our services meet the highest standards of performance and reliability, specifically tailored to the needs of the mortgage industry.\n\nKey Responsibilities:\n\n\n* Design, develop, and maintain backend services and microservices architecture using Java.\n\n* Implement event-driven systems utilizing Kafka and AWS SQS for real-time data processing and messaging.\n\n* Optimize and manage in-memory data stores with Redis for high-speed caching and data retrieval.\n\n* Develop and maintain robust database solutions with Postgres, ensuring data integrity and performance with PgAnalyze.\n\n* Deploy, monitor, and manage containerized applications using Kubernetes and Terraform, ensuring their scalability and resilience, and manage our cloud infrastructure.\n\n* Collaborate closely with product managers and designers to understand requirements and deliver technical solutions that meet business needs.\n\n* Develop and maintain RESTful APIs and gRPC services to support seamless integration with frontend applications and third-party services.\n\n* Ensure secure and efficient authentication and authorization processes using OAuth.\n\n* Manage codebases in a monorepo environment using Bazel for build automation.\n\n* Troubleshoot and resolve client support issues in a timely manner, ensuring 
minimal disruption to service.\n\n* Continuously explore and implement new technologies and frameworks to improve system performance and efficiency.\n\n* Write and maintain technical documentation on Confluence to document technical plans and processes, and facilitate knowledge sharing across the team.\n\n* Mentor junior engineers and contribute to the overall growth and development of the engineering team.\n\n\n\n\nRequired Qualifications:\n\n\n* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.\n\n* 5+ years of professional experience in backend development with a focus on microservices.\n\n* Proficiency in Java, with a strong preference for expertise in the Spring framework.\n\n* Strong experience with Apache Kafka for building event-driven architectures.\n\n* Hands-on experience with AWS SQS for message queuing and processing.\n\n* Expertise in Redis for caching and in-memory data management.\n\n* Solid understanding of Postgres or other relational databases, including performance tuning, migrations, and optimization.\n\n* Proven experience with Kubernetes for container orchestration and management.\n\n* Proficiency in developing and consuming RESTful APIs and gRPC services.\n\n* Proficiency with the command line, Git for version control, and GitHub for code reviews.\n\n* Familiarity with OAuth for secure authentication and authorization.\n\n* Strong understanding of software development best practices, including version control, testing, and CI/CD automation.\n\n* Excellent problem-solving skills and the ability to work independently and as part of a team.\n\n* Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.\n\n\n\n\nPreferred Qualifications:\n\n\n* Experience working in the mortgage and fintech industries, with a deep understanding of domain-specific challenges and B2B SaaS requirements.\n\n* Experience managing codebases in a monorepo environment with 
Bazel for build automation.\n\n* Understanding of security best practices and implementation in microservices.\n\n* Experience with performance monitoring and logging tools such as Grafana, Sentry, PgAnalyze, Prometheus, and New Relic.\n\n* Familiarity with cloud platforms such as AWS.\n\n* Familiarity with Python.\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Redis, Java, Cloud, Git, Senior, Junior, Engineer and Backend jobs that are similar:\n\n
$65,000 — $115,000/year\n
\n\n#Benefits\n
\n\n#Location\nGurgaon, Haryana, India
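The Ocrolus posting above emphasizes event-driven systems built on Kafka and SQS, where at-least-once delivery makes duplicate events a routine concern. Here is a minimal Python sketch of idempotent event handling; a plain in-memory set stands in for the Redis-backed deduplication store a production service would use, and delivery is simulated with a list:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    event_id: str
    payload: dict

@dataclass
class IdempotentConsumer:
    """Process each event exactly once, keyed by event_id."""
    seen: set = field(default_factory=set)       # stand-in for Redis dedup keys
    processed: list = field(default_factory=list)

    def handle(self, event: Event) -> bool:
        if event.event_id in self.seen:
            return False                         # duplicate delivery: skip
        self.seen.add(event.event_id)
        self.processed.append(event.payload)     # business logic goes here
        return True

consumer = IdempotentConsumer()
events = [Event("e1", {"doc": 1}), Event("e1", {"doc": 1}), Event("e2", {"doc": 2})]
results = [consumer.handle(e) for e in events]
# → [True, False, True]; only two payloads are processed
```

In a real deployment the dedup key would need a TTL and an atomic set-if-absent (e.g. Redis SETNX) so that concurrent consumers on different hosts agree on which delivery wins.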
\nAbout the Role\n\nThe Solutions Engineer plays a pivotal role in DriveWealth's growth and expansion by leading the API integration of new partners, educating partners on new products, and identifying processes that can be automated to optimize team and partner efficiency.\n\nWhat You'll Do\n\n\n* Program Management: Identify and implement process improvements that maximize time efficiency for DriveWealth employees and partners. Use your strong programming knowledge to quickly implement new automation in ServiceNow, GitHub, and other ancillary programs\n\n* Process Optimization: Define and document repeatable processes, ensuring operational efficiency and scalability while maintaining a keen eye for detail and quality assurance\n\n* Cross-functional Collaboration: Collaborate closely with internal teams, including sales, finance, product, technology, and operations, to develop cohesive strategies that prioritize partner success and drive revenue growth\n\n* Educate and Lead Partners: Align closely with partners who will be integrating DriveWealth's API. Use your robust programming knowledge to provide education, project management, and architecture support while maintaining positive partner relationships\n\n* Accountability and Advocacy: Establish a culture of mutual accountability between the Relationship Management team and partners. 
Advocate for their needs and marshal resources to support them while aligning with DriveWealth's overarching vision\n\n* Identify Product Bugs: Use your debugging skills to identify pain points and bugs reported by partners while suggesting and introducing changes to our product team\n\n\n\n\nWhat You'll Need\n\n\n* Five years of experience in software development, with a focus on the implementation of APIs\n\n* Proficiency in full-stack development, with working knowledge of cloud-based platform tools such as AWS and/or Google Cloud Platform\n\n* Proficiency in implementing event-based programs, working with tools such as Apache Kafka and AWS SQS\n\n* Exceptional written and verbal communication skills\n\n* Strong time management abilities with the capacity to prioritize and manage multiple partner engagements concurrently\n\n* Proven problem-solving skills and meticulous attention to detail\n\n* Ability to collaborate effectively with cross-functional teams and external partners\n\n* Customer-centric mindset with a commitment to ensuring client success and satisfaction\n\n* Proactive, self-motivated, and capable of working independently while taking ownership of client relationships\n\n* You thrive in a fast-paced environment and can deliver results under pressure\n\n* You are driven by a passion for partner success and business growth\n\n\n\n\nApplicants must be authorized to work for any employer in the U.S. DriveWealth is unable to sponsor or take over sponsorship of an employment visa at this time. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Cloud, API and Engineer jobs that are similar:\n\n
$60,000 — $110,000/year\n
\n\n#Benefits\n
\n\n#Location\nNew York City, New York, United States
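The DriveWealth posting above centers on partner API integrations, where transient network failures are expected. A minimal sketch of retry with exponential backoff around an API call; the flaky endpoint is simulated, and a production client would also cap total elapsed time, add jitter, and only retry idempotent requests:

```python
import time

def call_with_retries(request, max_attempts=3, base_delay=0.01):
    """Call an API, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return request()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                      # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated partner endpoint that fails twice before succeeding.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return {"status": "ok"}

response = call_with_retries(flaky_request)
# → {"status": "ok"} after three attempts
```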
\nGRAIL is a healthcare company whose mission is to detect cancer early, when it can be cured. GRAIL is focused on alleviating the global burden of cancer by developing pioneering technology to detect and identify multiple deadly cancer types early. The company is using the power of next-generation sequencing, population-scale clinical studies, and state-of-the-art computer science and data science to enhance the scientific understanding of cancer biology, and to develop its multi-cancer early detection blood test. GRAIL is headquartered in Menlo Park, CA with locations in Washington, D.C., North Carolina, and the United Kingdom. GRAIL, LLC is a wholly-owned subsidiary of Illumina, Inc. (NASDAQ:ILMN). For more information, please visit www.grail.com.\n\n\nAre you a champion of automation interested in using your talents for optimizing processes that will make an impact on the fight against cancer? If so, join GRAIL on the Data Integration team in Research! \n\n\nGRAIL is seeking a Staff Data Engineer to join our team to support the growing data needs of GRAIL's clinical and research activities. You will leverage your expertise in automation and data engineering to ensure our scientific teams have the data they need to succeed. 
This role is pivotal in advancing GRAIL's mission by enhancing our data infrastructure and contributing to our early cancer detection efforts.\n\n\n\nRESPONSIBILITIES\n* Be part of a highly collaborative team that focuses on delivering value to cross-functional partners by designing, deploying, and automating secure, efficient, and scalable data infrastructure and tools, reducing manual effort and streamlining operations.\n* Help model GRAIL data and ensure that it follows FAIR principles (findable, accessible, interoperable, and reusable).\n* Drive the design, deployment, and automated delivery of data infrastructure, standardized data models, datasets, and tools.\n* Integrate automated testing and release processes to improve the quality and velocity of software and data deliveries.\n* Collaborate with cross-functional teams, from Research to Clinical Lab Operations to Software Engineering, to provide comprehensive data solutions from conception to delivery.\n* Ensure all software and data meet high standards for quality, clinical compliance, and privacy.\n* Mentor fellow engineers and scientists, promoting best practices in software and data engineering.\n\n\n\nPREFERRED EXPERIENCE\n* B.S. / M.S. in a quantitative field (e.g., Computer Science, Engineering, Mathematics, Physics, Computational Biology) with at least 8 years of related industry experience, or a Ph.D. 
with at least 5 years of related industry experience.\n* Extensive experience with relational databases, data modeling principles, and data pipeline tools and workflow engines (e.g., SQL, DBT, Apache Airflow, AWS Glue, Spark).\n* Extensive experience with DevOps practices, including CI/CD pipelines, containerized deployment (e.g., Kubernetes), and infrastructure-as-code (e.g., Terraform).\n* Experience supporting data science / machine learning data pipelines, preferably in the context of analysis of biological data.\n* Experience developing data pipelines using scalable cloud-based data warehouses / data lakes on AWS, Azure, or GCP.\n* Solid programming skills in object-oriented and/or functional programming paradigms.\n* Ability to embrace uncertainty, navigate ambiguity, and collaborate with product teams and stakeholders to refine requirements and drive towards clear engineering objectives and designs.\n* A commitment to constructive dialogue, both in giving and receiving critical feedback, to foster an environment of continuous improvement.\n\n\n\nHIGHLY WELCOME EXPERIENCE\n* Prior industry experience in the healthcare, biotech, or life sciences industry, especially in the context of next-generation sequencing.\n* Experience working in a regulated environment (e.g., FDA, CLIA, GDPR).\n* Proficiency in Python and R.\n* Experience building microservices and web applications.\n\n\n\n\n\nThe estimated, full-time, annual base pay scale for this position is $180,000 - $202,000. Actual base pay will consider skills, experience, and location.\n\n\nBased on the role, colleagues may be eligible to participate in an annual bonus plan tied to company and individual performance, or an incentive plan. 
We also offer a long-term incentive plan to align company and colleague success over time.\n\n\nIn addition, GRAIL offers a progressive benefit package, including flexible time-off, a 401k with a company match, and alongside our medical, dental, vision plans, carefully selected mindfulness offerings.\n\n\nGRAIL is an Equal Employment Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status. We will reasonably accommodate all individuals with disabilities so that they can participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. GRAIL maintains a drug-free workplace. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar DevOps and Engineer jobs:

$60,000 — $105,000/year

#Benefits
401(k)
Distributed team
Async
Vision insurance
Dental insurance
Medical insurance
Unlimited vacation
Paid time off
4 day workweek
401k matching
Company retreats
Coworking budget
Learning budget
Free gym membership
Mental wellness budget
Home office budget
Pay in crypto
Pseudonymous
Profit sharing
Equity compensation
No whiteboard interview
No monitoring system
No politics at work
We hire old (and young)

#Location
San Diego, CA
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
About the Team:

The Applications engineering team is in charge of applications such as ThirdEye and Data Manager (to name a few) that allow customers to leverage fast, fresh, and actionable insights from Apache Pinot. StarTree ThirdEye is an anomaly detection platform that allows organizations to monitor critical business KPIs in real time and detect any anomalies automatically. StarTree Data Manager enables StarTree users to go from registration to first query within a matter of minutes. There are more such applications currently in the works!

The charter for the team includes all phases of the application lifecycle, from ideation to delivery. The team is in charge of not just the frontend and backend but also the monetization and growth charter for every application. Additionally, the Applications engineering team is building an application infrastructure platform that StarTree customers can leverage to build applications themselves.

About the Role:

Design and implement data-intensive, stateful cloud offerings that are fault tolerant and scale gracefully under load.

Ensure service SLAs are met even in the presence of catastrophic failures.

Evaluate and analyze requirements from product owners and coordinate with stakeholders to effectively drive execution from ideation to delivery.

Leverage various monitoring and alerting services to resolve escalations and urgent issues in deployed services.

Interact with internal/external customers to identify and influence the product roadmap.

What we're looking for:

Experience with large-scale systems: experience building, deploying, and maintaining large-scale systems that can handle high volumes of data and traffic.

Strong communication skills: the ability to collaborate effectively with cross-functional teams, including product managers, front-end engineers, and data scientists.

Proficiency in programming languages such as Java, Go, and Python.

Relational database concepts and familiarity with SQL.

The base salary range for this US full-time position is $150,000 - $220,000, subject to standard withholding and applicable taxes. Additionally, new hires receive competitive and compelling equity grants and access to a comprehensive benefits offering. The base salary range reflects the minimum and maximum target for candidates. The salary and equity compensation offered may vary depending on factors including location, skills, experience, and the assessment process.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Cloud, Engineer, and Backend jobs:

$65,000 — $120,000/year
#Location
Mountain View, California, United States
Remote Senior Machine Learning Engineer Data Solutions
Apply now for a career that puts wellbeing first!

GET TO KNOW US

Wellhub (formerly Gympass*) is a corporate wellness platform that connects employees to the best partners for fitness, mindfulness, therapy, nutrition, and sleep, all included in one subscription designed to cost less than each individual partner. Founded in 2012 and headquartered in NYC, we have a growing global team in 11 countries. At Wellhub, you have the opportunity to build a career in a high-growth tech company that places wellbeing at the foundation of its culture, and to contribute to making every company a wellness company.

*Big news: Gympass is now Wellhub!
We are thrilled to announce our rebranding as Wellhub, marking a significant milestone in our journey. This transformation reflects our evolution from a "pass for gyms" to a comprehensive employee wellbeing solution. With our refreshed identity, we are poised to embark on an exciting new chapter of growth and expansion. We are elevating our offerings, including a completely new app experience and an expanded network of wellbeing partners. Learn more about it here.

THE OPPORTUNITY

We are hiring a Senior Machine Learning Engineer for our Data Solutions team in Brazil!

The Data department at Wellhub contributes to our data democratization mission. We are responsible for empowering every team by providing a scalable and reliable Data and ML platform, delivering an efficient journey for every data practitioner and helping elevate business outcomes from our data.

We tackle large-scale production challenges using software engineering principles, leveraging cutting-edge technologies such as Kubernetes, Trino, Spark, Kafka, Airflow, Flink, MLflow, and more. Our infrastructure is entirely cloud-based, offering a dynamic and innovative environment for data-driven solutions.

YOUR IMPACT

* Develop and maintain a scalable platform to streamline the development, deployment, and management of machine/deep learning models;
* Design and implement data architectures and pipelines to solve complex business challenges;
* Ensure engineering best practices to create scalable and reliable data solutions;
* Live the mission: inspire and empower others by genuinely caring for your own wellbeing and your colleagues'. Bring wellbeing to the forefront of work, and create a supportive environment where everyone feels comfortable taking care of themselves, taking time off, and finding work-life balance.

WHO YOU ARE

You have created robust and scalable Data and/or Machine Learning architectures and frameworks;

Proficient in at least one programming language (e.g. Java, Python, Scala);

You can collaborate with different teams to understand needs and develop effective data solutions;

You are motivated to design, implement, and maintain software solutions within the data realm;

Demonstrated experience in building and maintaining MLOps tools and pipelines for model development, deployment, and automation;

Understanding of machine and deep learning principles and frameworks (e.g. TensorFlow, PyTorch, scikit-learn);

Understanding of data engineering and architecture principles and technologies (e.g. Apache Spark, Hadoop, Apache Flink);

You can articulate ideas clearly when speaking to groups in English.

The following will be considered a plus:

Hands-on experience with cloud platforms, particularly AWS, including services like SageMaker, EC2, and S3;

Familiarity with containerization technologies such as Docker/Kubernetes/Crossplane;

Ability to deploy and manage machine learning models within Kubernetes clusters;

Experience with DevOps practices, including CI/CD pipelines, infrastructure as code (IaC), and logging tools such as Prometheus, Grafana, or AWS CloudWatch.

We recognize that individuals approach job applications differently. We strongly encourage all aspiring applicants to go for it, even if they don't match the job description 100%. We welcome your application and will be delighted to explore if you could be a great fit for our team. For this specific role, please note that experience working with Java or Python, an understanding of machine and deep learning principles and frameworks, and an advanced level of English are mandatory requirements.

WHAT WE OFFER YOU

We're a wellness company that is committed to the health and wellbeing of our employees. Our flexible program allows you to customize your benefits according to your needs!

Our benefits include:

WELLNESS: Health, dental, and life insurance.

FLEXIBLE WORK: Choose when and where you work. For most, this will be a hybrid office/remote structure, but it can vary depending on the needs of the role and employee preferences. We offer all employees a home office stipend and a monthly flexible work allowance to help cover the costs of working from home.

FLEXIBLE SCHEDULE: Wellhubbers and their leaders can make the best decisions for their scope.
This includes flexibility to adjust their working hours based on their personal schedule, time zone, and business needs.

WELLHUB: We believe in our mission and encourage our employees and their families to take care of their wellbeing too. Access onsite gyms and fitness studios, digital fitness programs, and online wellness resources for meditation, nutrition, mental health support, and more. You will receive the Gold plan at no cost, and other premium plans will be significantly discounted.

PAID TIME OFF: We know how important it is that our employees take time away from work to recharge. Vacation after 6 months and 3 days off per year + 1 day off for each year of tenure (up to 5 additional days) + an extra day off for your birthday.

PAID PARENTAL LEAVE: Welcoming a new child is one of the most special moments in your life, and we want our employees to take the time to be present and enjoy their growing family. We offer 100% paid parental leave to all new parents and extended maternity leave.

CAREER GROWTH: Outstanding opportunities for personal and career growth. That means we maintain a growth mindset in everything we do and invest deeply in employee development.

CULTURE: An exciting and supportive atmosphere with ambitious people from around the world! You'll partner with global colleagues and share in the success of a high-growth technology company disrupting the health and wellness space. Our value-based culture of trust, flexibility, and integrity makes this possible every day. Find more info on our careers page!

And to get a glimpse of Life at Wellhub… follow us on Instagram @wellhublife and LinkedIn!

Diversity, Equity, and Belonging at Wellhub

We aim to create a collaborative, supportive, and inclusive space where everyone knows they belong.

Wellhub is committed to creating a diverse work environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, sex, gender identity or expression, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law.

Questions on how we treat your personal data? See our Job Applicant Privacy Notice.

#LI-REMOTE

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar DevOps, Cloud, Senior, and Engineer jobs:

$60,000 — $110,000/year
#Location
São Paulo, São Paulo, Brazil
Axoni is building the next generation of capital markets technology. Our solutions are used by the world's leading banks, asset managers, hedge funds, and infrastructure providers. Our diverse team focuses every day on our goal of building products that will change and improve how our clients and the markets interact.

We are seeking talented, motivated professionals who want to be part of this once-in-a-career opportunity to not only see, but also drive, the incredible changes coming to global capital markets. We are building a culture where our team feels valued and everyone is given an opportunity to grow and succeed. We try to live by our Core Values and demonstrate what we believe represents the kind of company we are working to build. These values are: Delivery is everything; Choose kindness; Be better every day.

Axoni is looking for Java Software Engineers who will be responsible for software development for our biggest client initiative. Our projects span multiple industries, including Bond Issuance, Securities Lending, and Equity Swaps, to deliver a seamless, optimized experience all the way to the end user. You will work directly with our clients to understand and solve their largest pain points.
*Selected Hiring Hubs Include: New York, New Jersey, Pennsylvania, Connecticut, DC Area, North Carolina, Florida, Texas, and England*

You will:
* Use Java to develop cloud-hosted, API-first microservices and applications
* Handle end-to-end development, including coding, testing, debugging, and reviewing code
* Interact with users and development teams to gather and define requirements and analyze user stories for validity and feasibility
* Work within the team on iterative development that delivers high-quality, stable services
* Engineer effective, defect-free configurations and code that meet business requirements and team standards
* Interact with messaging systems like Apache Kafka, MQ, etc.
* Work in a scrum team and follow Agile and Test-Driven Development best practices
* Work with containerization/orchestration tools such as Docker or Kubernetes

Qualifications:
* 5+ years of professional software development experience using Java
* Experience designing distributed enterprise software
* Experience working with DevOps tools such as Kubernetes/Helm, Terraform, Docker, etc.
* Experience deploying and supporting production workloads
* Experience building REST services and/or microservices
* Strong database experience, preferably with PostgreSQL, MySQL, Oracle, or DB2
* Familiarity with tools and frameworks in the Java ecosystem (Spring, Spring Boot, Vert.x, etc.)
* Experience with AWS infrastructure
* Experience writing concurrent and multi-threaded Java applications

Bonus Points:
* Experience with SaaS
* Capital markets and fintech experience

Individuals seeking employment at Axoni are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, or sexual orientation. You are being given the opportunity to provide the following information in order to help us comply with federal and state Equal Employment Opportunity/Affirmative Action record keeping, reporting, and other legal requirements.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Docker, DevOps, Java, Legal, and Engineer jobs:

$70,000 — $120,000/year
#Location
England