About Us

Hawk is the leading provider of AI-supported anti-money laundering and fraud detection technology. Banks and payment providers globally use Hawk's powerful combination of traditional rules and explainable AI to improve the effectiveness of their AML compliance and fraud prevention, identifying more crime while maximizing efficiency by reducing false positives. With our solution, we play a vital role in the global fight against money laundering, fraud, and the financing of terrorism. We offer a culture of mutual trust, support, and passion, while providing individuals with opportunities to grow professionally and make a difference in the world.

Your Mission

As a DevOps Engineer, you will play a crucial role in ensuring the scalability, security, and reliability of our AI-driven financial crime prevention platform. You will automate cloud infrastructure, implement monitoring and observability solutions, and build secure, scalable CI/CD pipelines. Your work will directly contribute to maintaining high availability for a platform that fights financial crime 24/7. This role is based on the U.S. East Coast and requires expertise in cloud infrastructure, automation, security best practices, and continuous integration/deployment (CI/CD).

Your Responsibilities

* Provision, manage, and scale multi-cloud environments using Infrastructure as Code (IaC), e.g., Terraform.
* Maintain high availability (HA), fault tolerance, and least-privilege security practices, while optimizing cloud costs.
* Design and maintain developer-friendly CI/CD workflows, container templates, and reusable artifacts for seamless software delivery.
* Implement real-time monitoring, alerting, and observability solutions (e.g., Elastic Stack, Prometheus, Grafana, CloudWatch) to proactively detect and resolve issues (a minimal instrumentation sketch appears at the end of this posting).
* Implement and enforce cloud security best practices, identify and mitigate vulnerabilities, and ensure compliance with data protection regulations.
* Provide technical guidance to clients running Hawk's platform in their own VPC environments, supporting onboarding and integration.
* Develop structured documentation for cloud architectures, best practices, and deployment processes, ensuring seamless team collaboration.

Your Profile

* 5+ years of experience in DevOps, Site Reliability Engineering (SRE), or Cloud Engineering roles.
* Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
* Strong expertise in Kubernetes, containerized applications, and cloud-native technologies.
* Hands-on experience with AWS or GCP and their core services.
* Proficiency with Terraform and Infrastructure as Code (IaC) methodologies.
* Experience with CI/CD tools such as GitLab CI, GitHub Actions, or similar.
* Strong knowledge of observability and monitoring tools (e.g., Elastic Stack, Prometheus, Grafana, CloudWatch).
* Solid understanding of cloud security principles, least-privilege access, and automated security policies.
* Ability to diagnose complex technical challenges and provide scalable, secure solutions.
* Strong communication and collaboration skills; able to work effectively in a remote, cross-functional environment.
* Comfortable in a fast-paced, hands-on role, with a willingness to get your hands dirty and embrace feedback for continuous improvement.

Preferred Qualifications

* Experience in cybersecurity, penetration testing, and cloud compliance.
* Familiarity with Java Spring Boot and Apache Kafka.
* Experience in 24/7 uptime environments with on-call rotations.
* Knowledge of big data systems (PostgreSQL, S3/Azure Blob Storage, Elasticsearch).
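For illustration, here is a minimal sketch of the kind of custom-metric instrumentation the observability responsibility above refers to, assuming the Python prometheus_client library; the metric names and the process_transaction() stub are hypothetical and not part of Hawk's platform.

    # Illustrative sketch only: expose custom metrics for Prometheus to scrape
    # (and for Grafana to chart). Metric names and workload are hypothetical.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    TRANSACTIONS = Counter("transactions_screened_total", "Transactions screened")
    SCREEN_LATENCY = Histogram("screening_latency_seconds", "Time to screen one transaction")

    def process_transaction() -> None:
        # Stand-in for real screening work.
        time.sleep(random.uniform(0.01, 0.1))

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:
            with SCREEN_LATENCY.time():
                process_transaction()
            TRANSACTIONS.inc()

A Prometheus scrape job pointed at port 8000 would then collect these metrics, and Grafana or Alertmanager could chart and alert on the latency histogram.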
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar DevOps, Java, Cloud, and Engineer jobs:

$55,000 – $90,000/year
#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4-day workweek, 401(k) matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
About You

We are seeking a skilled Analytics Engineer to join our dynamic Data Team. The ideal candidate will have a comprehensive understanding of the data lifecycle from ingestion to consumption, with a particular focus on data modeling. This role will support various business domains, predominantly Finance, by organizing and structuring data to support robust analytics and reporting.

This role will be part of a highly collaborative team made up of US- and Brazil-based Teachable and Hotmart employees.

What You'll Do

* Data Ingestion to Consumption: Manage the flow of data from ingestion to final consumption. Organize data, understand modern data structures and file types, and ensure proper storage in data lakes and data warehouses.
* Data Modeling: Develop and maintain entity-relationship models. Relate business and calculation rules to data models to ensure data integrity and relevance.
* Pipeline Implementation: Design and implement data pipelines, preferably in SQL or Python, to ensure efficient data processing and transformation.
* Reporting Support: Collaborate with business analysts and other stakeholders to understand reporting needs and ensure that data structures support these requirements.
* Documentation: Maintain thorough documentation of data models, data flows, and data transformation processes.
* Collaboration: Work closely with other members of the Data Team and cross-functional teams to support various data-related projects.
* Quality Assurance: Implement and monitor data quality checks to ensure accuracy and reliability of data.
* Cloud Technologies: While the focus is on data modeling, familiarity with cloud technologies and platforms (e.g., AWS) is a plus.

What You'll Bring

* 3+ years of experience working within data engineering, analytics engineering, and/or similar functions.
* Experience collaborating with business stakeholders to build and support data projects.
* Experience with database languages, indexing, and partitioning to handle large volumes of data and create optimized queries and databases.
* Experience with file manipulation and organization, such as Parquet.
* Experience with the "ETL/ELT as code" approach for building Data Marts and Data Warehouses.
* Experience with cloud infrastructure and knowledge of solutions like Athena, Redshift Spectrum, and SageMaker.
* Experience with Apache Airflow for creating DAGs for a variety of purposes (a minimal DAG sketch follows this list).
* Critical thinking for evaluating contexts and making decisions about delivery formats that meet the company's needs (e.g., materialized views).
* Knowledge of development languages, preferably Python or Spark.
* Knowledge of SQL.
* Knowledge of S3, Redshift, and PostgreSQL.
* Experience in developing highly complex historical transformations; utilization of events is a plus.
* Experience with ETL orchestration and updates.
* Experience with error and inconsistency alerts, including detailed root cause analysis, correction, and improvement proposals.
* Experience with documentation and process creation.
* Knowledge of data pipeline and LakeHouse technologies is a plus.
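As a point of reference, here is a minimal sketch of the kind of Airflow DAG mentioned above, assuming Airflow 2.4+ (for the schedule parameter); the DAG id, schedule, and the extract/transform callables are hypothetical placeholders, not Teachable's pipelines.

    # Illustrative sketch only: a two-task Airflow DAG (extract, then transform).
    # The dag_id, schedule, and callables are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract() -> None:
        print("pull raw data into the lake")

    def transform() -> None:
        print("build the finance data mart")

    with DAG(
        dag_id="finance_mart_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task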
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar Design, Python, Cloud, and Engineer jobs:

$62,500 – $117,500/year
#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4-day workweek, 401(k) matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
#Location

São Paulo, São Paulo, Brazil
We are currently seeking a Senior Data Engineer with 5-7 years' experience. The ideal candidate will be able to work independently within an Agile environment and have experience working with cloud infrastructure, leveraging tools such as Apache Airflow, Databricks, dbt, and Snowflake. Familiarity with real-time data processing and AI implementation is advantageous.

Responsibilities:

* Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch and real-time use cases.
* Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks.
* Leverage Databricks for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.
* Utilize PySpark and/or Ray to build and scale distributed computing tasks, enhancing the performance of machine learning model training and inference processes (a minimal PySpark sketch appears at the end of this posting).
* Monitor, troubleshoot, and resolve issues within data pipelines and infrastructure, implementing best practices for data engineering and continuous improvement.
* Diagrammatically document data engineering workflows.
* Collaborate with other Data Engineers, Product Owners, Software Developers, and Machine Learning Engineers to implement new product features by understanding their needs and delivering in a timely manner.

Qualifications:

* Minimum of 3 years' experience deploying enterprise-level, scalable data engineering solutions.
* Strong examples of independently developed end-to-end data pipelines, from problem formulation and raw data to implementation, optimization, and results.
* Proven track record of building and managing scalable cloud-based infrastructure on AWS (incl. S3, DynamoDB, EMR).
* Proven track record of implementing and managing the AI model lifecycle in a production environment.
* Experience using Apache Airflow (or equivalent), Snowflake, and Lucene-based search engines.
* Experience with Databricks (Delta format, Unity Catalog).
* Advanced SQL and Python knowledge with associated coding experience.
* Strong experience with DevOps practices for continuous integration and continuous delivery (CI/CD).
* Experience wrangling structured and unstructured file formats (Parquet, CSV, JSON).
* Understanding and implementation of best practices within ETL and ELT processes.
* Data quality best-practice implementation using Great Expectations.
* Real-time data processing experience using Apache Kafka (or equivalent) is advantageous.
* Ability to work independently with minimal supervision.
* Takes initiative and is action-focused.
* Mentors and shares knowledge with junior team members.
* Collaborative, with a strong ability to work in cross-functional teams.
* Excellent communication skills, with the ability to communicate with stakeholders across varying interest groups.
* Fluency in spoken and written English.

#LI-RT9

Edelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics, and data consultancy with a distinctly human mission.

We use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting, and connections more meaningful.

DXI brings together and integrates the necessary people-based PR, communications, social, research, and exogenous data, as well as the technology infrastructure to create, collect, store, and manage first-party data and identity resolution. DXI is comprised of over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.

To learn more, visit: https://www.edelmandxi.com
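For context, here is a minimal sketch of the kind of PySpark batch job referenced in the responsibilities above; the input and output paths, column names, and aggregation are hypothetical and do not describe Edelman DXI's actual pipelines.

    # Illustrative sketch only: a small PySpark rollup job.
    # The S3 paths and columns are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_events_rollup").getOrCreate()

    events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical input

    daily_counts = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/marts/daily_event_counts/"  # hypothetical output
    )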
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar Python, DevOps, Cloud, Senior, Junior, and Engineer jobs:

$60,000 – $110,000/year
#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4-day workweek, 401(k) matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
Verana Health, a digital health company that delivers quality drug lifecycle and medical practice insights from an exclusive real-world data network, recently secured a $150 million Series E led by Johnson & Johnson Innovation – JJDC, Inc. (JJDC) and Novo Growth, the growth-stage investment arm of Novo Holdings.

Existing Verana Health investors GV (formerly Google Ventures), Casdin Capital, and Brook Byers also joined the round, as well as notable new investors, including the Merck Global Health Innovation Fund, THVC, and Breyer Capital.

We are driven to create quality real-world data in ophthalmology, neurology, and urology to accelerate quality insights across the drug lifecycle and within medical practices. Additionally, we are driven to advance the quality of care and quality of life for patients. DRIVE defines our internal purpose and is the galvanizing force that helps ground us in a shared corporate culture. DRIVE is: Diversity, Responsibility, Integrity, Voice-of-Customer, and End-Results.

Our headquarters are located in San Francisco, and we have additional offices in Knoxville, TN and New York City, with employees working remotely in AZ, CA, CO, CT, FL, GA, IL, LA, MA, NC, NJ, NY, OH, OR, PA, TN, TX, UT, VA, WA, and WI. All employees are required to have permanent residency in one of these states. Candidates who are willing to relocate are also encouraged to apply.

Job Title: Data Engineer

Job Intro:

As a Data/Software Engineer at Verana Health, you will be responsible for extending a set of tools used for data pipeline development. You will have strong hands-on experience in the design and development of cloud services, and a deep understanding of data quality, metadata management, data ingestion, and curation. You will generate software solutions using Apache Spark, Hive, Presto, and other big data frameworks, analyze systems and requirements to provide the best technical solutions with regard to flexibility, scalability, and reliability of the underlying architecture, and document and improve software testing and release processes across the entire data team.

Job Duties and Responsibilities:

* Architect, implement, and maintain scalable data architectures to meet data processing and analytics requirements, utilizing AWS and Databricks.
* Troubleshoot complex data issues and optimize pipelines, taking into consideration data quality, computation, and cost.
* Collaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions.
* Design solutions to problems related to the ingestion and curation of highly variable data structures in a highly concurrent cloud environment.
* Retain metadata to track execution details, support reproducibility, and provide operational metrics.
* Create routines to add observability and alerting to the health of pipelines.
* Establish data quality checks and ensure data integrity and accuracy throughout the data lifecycle (a minimal example follows this posting).
* Research, perform proofs of concept, and leverage performant database technologies (like Aurora Postgres, Elasticsearch, Redshift) to support end-user applications that need sub-second response times.
* Participate in code reviews.
* Proactively stay updated with industry trends and emerging technologies in data engineering.
* Develop data services using RESTful APIs that are secure (OAuth/SAML), scalable (containerized using Docker), observable (using monitoring tools like Datadog or the ELK stack), and documented using OpenAPI/Swagger, built with frameworks in Python/Java and deployed via automated CI/CD using GitHub Actions.
* Document data engineering processes, architectures, and configurations.

Basic Requirements:

* A minimum of a BS degree in computer science, software engineering, or a related scientific discipline.
* A minimum of 3 years of experience in software development.
* Strong programming skills in languages such as Python/PySpark and SQL.
* Experience with Delta Lake, Unity Catalog, Delta Sharing, and Delta Live Tables (DLT).
* Experience with data pipeline orchestration tools such as Airflow and Databricks Workflows.
* 1 year of experience working in an AWS cloud computing environment, preferably with Lambda, S3, SNS, and SQS.
* Understanding of data management principles (governance, security, cataloging, lifecycle management, privacy, quality).
* Good understanding of relational databases.
* Demonstrated ability to build software tools in a collaborative, team-oriented environment that are product- and customer-driven.
* Strong communication and interpersonal skills.
* Experience using source code version control.
* Hands-on experience with Docker containers and container orchestration.

Bonus:

* Healthcare and medical data experience is a plus.
* Additional experience with modern compiled programming languages (C++, Go, Rust).
* Experience building HTTP/REST APIs using popular frameworks.
* Experience building out extensive automated test suites.

Benefits:

* We provide health, vision, and dental coverage for employees: Verana pays 100% of employee insurance coverage and 70% of family coverage, plus an additional monthly $100 individual / $200 HSA contribution with an HDHP.
* Spring Health mental health support.
* Flexible vacation plans.
* A generous parental leave policy and family-building support through the Carrot app.
* $500 learning and development budget.
* $25/week in DoorDash credit.
* Headspace meditation app - unlimited access.
* Gympass - 3 free live classes per week, plus monthly discounts for gyms like SoulCycle.

Final note:

You do not need to match every listed expectation to apply for this position. Here at Verana, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.
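Below is a minimal sketch of the kind of data-quality check described in the duties above, using PySpark; the table name, key column, and failure policy are hypothetical and do not represent Verana's actual checks.

    # Illustrative sketch only: a simple data-quality gate on a curated table.
    # The table name and key column are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq_checks").getOrCreate()

    df = spark.read.table("curated.visits")  # hypothetical curated table

    total = df.count()
    null_keys = df.filter(F.col("visit_id").isNull()).count()
    dup_keys = total - df.dropDuplicates(["visit_id"]).count()

    # Fail the run (and let the orchestrator alert) if integrity checks do not pass.
    if null_keys > 0 or dup_keys > 0:
        raise ValueError(
            f"Data-quality check failed: {null_keys} null and {dup_keys} duplicate visit_id values"
        )

    print(f"Data-quality check passed for {total} rows")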
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar Design, Docker, Testing, Cloud, and Engineer jobs:

$70,000 – $100,000/year
#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4-day workweek, 401(k) matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)