BambooHR is hiring a
Remote Senior Data Engineer
What You'll Do

As a Senior Data Engineer on the data platform team, we'll rely on your expertise across multiple disciplines to develop, deploy, and support data systems, data pipelines, data lakes, and lakehouses. Your ability to automate, performance-tune, and scale the data platform will be key to your success.

Your initial areas of focus will include:

Collaborate with stakeholders to make effective use of core data assets

Load both streaming and batch data with Spark and PySpark libraries (see the illustrative sketch after the lists below)

Engineer lakehouse models to support defined data patterns and use cases

Leverage a combination of tools, engines, libraries, and code to build scalable data pipelines

Work within an IT-managed AWS account and VPC to stand up and maintain data platform development, staging, and production environments

Document data pipelines, cloud infrastructure, and standard operating procedures

Express data platform cloud infrastructure, services, and configuration as code

Automate load, scaling, and performance testing of data platform pipelines and infrastructure

Monitor, operate, and optimize data pipelines and distributed applications

Help ensure appropriate data privacy and security

Automate continuous upgrades and testing of data platform infrastructure and services

Build data pipeline unit, integration, quality, and performance tests

Participate in peer code reviews, code approvals, and pull requests

Identify, recommend, and implement opportunities for improvement in efficiency, resilience, scale, security, and performance

What You Need to Get the Job Done (if you don't have it all, apply anyway!)

Experience developing, scaling, and tuning data pipelines in Spark with PySpark

Understanding of data lake, lakehouse, and data warehouse systems and related technologies

Knowledge and understanding of data formats, data patterns, models, and methodologies

Experience storing data objects in Hadoop or Hadoop-like environments such as S3

Demonstrated ability to deploy, configure, secure, performance-tune, and scale EMR and Spark

Experience working with streaming technologies such as Kafka and Kinesis

Experience with the administration, configuration, performance tuning, and security of database engines like Snowflake, Databricks, Redshift, Vertica, or Greenplum

Ability to work with cloud infrastructure, including resource scaling, S3, RDS, IAM, security groups, AMIs, CloudWatch, CloudTrail, and Secrets Manager

Understanding of security around cloud infrastructure and data systems

Familiarity with Git-based team coding workflows

Bonus Skills (Not Required, So Apply Anyway!)

Experience deploying and implementing lakehouse technologies such as Hudi, Iceberg, and Delta

Experience with Flink, Presto, Dremio, Databricks, or Kubernetes

Experience expressing infrastructure as code with tools like Terraform

Experience with and understanding of a zero-trust security framework

Experience developing CI/CD pipelines for automated testing and code deployment

Experience with QA and test automation

Exposure to visualization tools like Tableau

Beyond the technical skills, we're looking for individuals who are:

Clear communicators with team members and stakeholders

Analytical and perceptive of patterns

Creative in coding

Detail-oriented and persistent

Productive in a dynamic setting

If you love to learn, you'll be in good company. You'll likely have a Bachelor's degree in computer science or information systems, or equivalent working experience.
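To make the Spark bullet above concrete, here is a minimal, illustrative PySpark sketch of a batch load and a streaming load into a data lake zone. Every bucket, broker, topic, and column name is a hypothetical placeholder rather than a BambooHR system, and the streaming read assumes the spark-sql-kafka connector package is available to the session.

```python
# Illustrative sketch only: all paths, brokers, topics, and columns are
# hypothetical placeholders, not BambooHR systems.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("lakehouse-loads").getOrCreate()

# Batch load: read raw JSON objects from an S3 prefix and append them,
# partitioned by date, to a lake zone.
batch_df = spark.read.json("s3://example-raw-bucket/events/")
(batch_df
    .filter(col("event_type").isNotNull())
    .write.mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-lake-bucket/events/"))

# Streaming load: consume a Kafka topic and land micro-batches in the same
# lake zone, checkpointing so the job can resume after restarts.
stream_df = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "example-broker:9092")
    .option("subscribe", "events")
    .load())
(stream_df
    .selectExpr("CAST(value AS STRING) AS payload")
    .writeStream
    .format("parquet")
    .option("path", "s3://example-lake-bucket/events_stream/")
    .option("checkpointLocation", "s3://example-lake-bucket/_checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start())
```

In practice the landing zone would more likely be a Hudi, Iceberg, or Delta table rather than plain Parquet, which is where the lakehouse experience listed under Bonus Skills comes in.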
#Salary and compensation
No salary data was published by the company, so we estimated a range based on similar senior data engineering and cloud jobs:
$62,500–$120,000/year
#Location
Charleston, South Carolina, United States