About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1,200 employees across 15 countries, working remotely as well as in our headquarters in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.

Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/

Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies and entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.

Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 microservices in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.

About the Role:
We are seeking a talented and motivated data engineer to join our team. You will be responsible for designing, developing, and maintaining our data infrastructure, and for building backend systems that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. The role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance. You will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organisation.

Responsibilities:
* Software Engineering Excellence: Write clean, efficient, and maintainable code in JavaScript or Python, adhering to best practices and design patterns
* Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures
* Data Pipeline Development: Design, develop, and maintain data pipelines (ETL/ELT) that process structured and unstructured data from various sources
* Data Storage & Warehousing: Build and optimize databases, data lakes, and data warehouses (e.g. Snowflake) for high-performance querying
* Data Integration: Work with APIs and batch and streaming data sources to ingest and transform data
* Performance Optimization: Optimize queries, indexing, and partitioning for efficient data retrieval
* Collaboration: Work with data analysts, data scientists, software developers, and product teams to understand requirements and deliver scalable solutions
* Monitoring & Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably
* Ownership & Problem-Solving: Proactively identify issues and bottlenecks and propose innovative solutions to address them

Requirements:
* 3+ years of experience in software development
* Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
* Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows
* Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns
* Software Engineering Experience: Demonstrated experience (SDE II/III level) designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala, or Java)
* ETL Tools & Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink, or similar technologies
* Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift)
* Databases & Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, Elasticsearch)
* Version Control & CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment
* Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment
* Experience with data visualization tools (e.g. Superset, Tableau), Terraform and IaC, ML/AI data pipelines, and DevOps practices is a plus

EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary, and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

#LI-Remote #LI-NJ1

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Python, DevOps, JavaScript, Cloud, API, Marketing, Sales, Engineer, and Backend jobs:
$60,000 — $90,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Delhi
We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

* Architect and develop data pipelines optimized for performance, quality, and scalability
* Build, maintain, and operate the scalable, performant, containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
* Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the Data Lake
* Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
* Orchestrate sophisticated data flow patterns across a variety of disparate tooling
* Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
* Partner with the rest of the Data Platform team to set best practices and ensure their execution
* Partner with the analytics engineers to ensure the performance and reliability of our data sources
* Partner with machine learning engineers to deploy predictive models
* Partner with the legal and security teams to build frameworks and implement data compliance and security policies
* Partner with DevOps to build IaC and CI/CD pipelines
* Support code versioning and code deployments for data pipelines

You Have:

* 8+ years of professional experience designing, creating, and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
* Demonstrated experience writing clean, efficient, and well-documented Python code, and a willingness to become effective in other languages as needed
* Demonstrated experience writing complex, highly optimized SQL queries across large data sets
* Experience with cloud technologies such as AWS and/or Google Cloud Platform
* Experience with the Databricks platform
* Experience with IaC technologies like Terraform
* Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
* Experience building event streaming pipelines using Kafka/Confluent Kafka
* Experience with the modern data stack, e.g. Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
* Experience with containers and container orchestration tools such as Docker or Kubernetes
* Experience with Machine Learning and MLOps
* Experience with CI/CD (Jenkins, GitHub Actions, CircleCI)
* Thorough understanding of the SDLC and Agile frameworks
* Project management skills and a demonstrated ability to work autonomously

Nice to Have:

* Experience building data models using dbt
* Experience with JavaScript and event tracking tools like GTM
* Experience designing and developing systems with desired SLAs and data quality metrics
* Experience with microservice architecture
* Experience architecting an enterprise-grade data platform

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, Docker, Testing, DevOps, JavaScript, Cloud, API, Senior, Legal, and Engineer jobs:
$60,000 — $110,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
San Francisco, California, United States
Stacktical is a Predictive Scalability Testing platform. It ensures our customers design and ship software that always scales to the maximum of its ability and with minimum footprint.

The Stacktical Site Reliability Engineer is responsible for helping our customers engineer CI/CD pipelines around system testing practices that involve Stacktical. Like the rest of the team, they also actively participate in building the Stacktical platform itself.

We are looking for a skilled DevOps and Site Reliability Engineer, expert in scalability, who is excited about the vision of using Predictive Analytics and AI to reinvent the field. With a long-standing passion for automating your work and the work of others, you also understand how Software as a Service is increasingly empowering companies to do just that.

You can point to previous experience in startups, and you're capable of working remotely, with great efficiency, in fast-paced, demanding environments. Ideally, you'd have a proven track record of working remotely for 2+ years. Needless to say, you fully embrace the working philosophy of digital nomadism we're developing at Stacktical, with both the benefits and responsibilities that come with it.

Your role and responsibilities include the following:
- Architecture, implementation, and maintenance of server clusters, APIs, and microservices, including critical production environments, in Cloud and other hosting configurations (dedicated, VPS, and shared).
- Ensuring the availability, performance, and scalability of applications, in keeping with proven design and architecture best practices.
- Designing and executing scalability strategies that ensure the scalability and elasticity of the infrastructure.
- Managing a portfolio of software and its development life cycle, and optimizing its Continuous Integration and Delivery workflows (CI/CD).
- Automating the quality and reliability testing of applications (unit tests, integration tests, E2E tests, performance and scalability tests).

## Skills we are looking for

- A 50-50 mix of software development and system administration experience
- Proficiency in Node.js, Python, R, Erlang (Elixir), and/or Go
- Hands-on experience in NoSQL/SQL database optimization (slow query indexing, sharding, clustering)
- Hands-on experience administering high-availability, high-performance environments, as well as managing large-scale deployments of traffic-heavy applications
- Extensive knowledge of Cloud Computing concepts, technologies, and providers (Amazon AWS, Google Cloud Platform, Microsoft Azure…)
- A strong ability to design and execute cutting-edge system testing strategies (smoke tests, performance/load tests, regression tests, capacity tests)
- Excellent understanding of scalability processes and techniques
- Good grasp of scalability and elasticity concepts and creative auto scaling strategies (Auto Scaling Group management, API-based scheduling)
- Hands-on experience with Docker and Docker orchestration tools like Kubernetes, and their corresponding provider management services (Amazon ECS, Google Container Engine, Azure Container Service...)
- Hands-on experience with leading Infrastructure as Code tools like Terraform and Ansible
- Proven ability to work remotely with teams of various sizes in the same or different timezones, from anywhere, while remaining highly motivated, productive, and organized
- Excellent English communication skills, including verbal, written, and presentation; great email and instant messaging (Slack) proficiency

We're looking for a self-learner always willing to step out of her/his comfort zone to become better. An upright individual, ready to write the first and many chapters of the Stacktical story with us.

## Life at our virtual office

Our headquarters are in Paris, but our offices and our clients are everywhere in the world. We're a fully distributed company with a 100% remote workforce, so pretty much everything happens on Slack and various other collaborative tools.

## Remote work at Stacktical

Remote work at Stacktical requires you to forge a contract with the Stacktical company using your own billing structure. That means you would either need to own a company or leverage a compatible legal status. Labour laws can differ greatly from one country to another, and we are not (yet) in a position to comply with the local requirements of all our employees. Just because you will be a contractor doesn't make you any less of a fully fledged employee of Stacktical. In fact, even our founders are contractors too.

## Compensation Package

#### Fixed-price contract

Your contract's fixed price is engineered around your expectations, our possibilities, and the overall implications of remote work. Let's have a transparent chat about it.

#### Stock Options

Yes, joining Stacktical means you are entrusted to own part of the company.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar DevOps, JavaScript, Cloud, Erlang, Python, Node, API, Admin, Engineer, Apache, Nginx, Sys Admin, Docker, English, NoSQL, Microsoft, and Legal jobs:
$70,000 — $120,000/year
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.