About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1,200 employees across 15 countries, working remotely as well as in our headquarters in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.

Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/

Our Customers:
HighLevel serves a diverse customer base of over 60K agencies and entrepreneurs and 500K businesses globally, ranging from small and medium-sized businesses to enterprises across various industries and sectors.

Scale at HighLevel:
We operate at scale, handling over 40 billion API hits and 120 billion events monthly, with more than 500 microservices in production. Our systems manage 200+ terabytes of application data and 6 petabytes of storage.

About the Role:
We are seeking a talented and motivated data engineer to join our team. You will design, develop, and maintain our data infrastructure, along with backend systems that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. The role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance.
The candidate will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organisation.

Responsibilities:
* Software Engineering Excellence: Write clean, efficient, and maintainable code using JavaScript or Python while adhering to best practices and design patterns
* Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures
* Data Pipeline Development: Design, develop, and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources
* Data Storage & Warehousing: Build and optimize databases, data lakes, and data warehouses (e.g. Snowflake) for high-performance querying
* Data Integration: Work with APIs and batch and streaming data sources to ingest and transform data
* Performance Optimization: Optimize queries, indexing, and partitioning for efficient data retrieval
* Collaboration: Work with data analysts, data scientists, software developers, and product teams to understand requirements and deliver scalable solutions
* Monitoring & Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably
* Ownership & Problem-Solving: Proactively identify issues or bottlenecks and propose innovative solutions to address them

Requirements:
* Experience: 3+ years of experience in software development
* Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
* Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows
* Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns
* Software Engineering Experience: Demonstrated experience (SDE II/III level) in designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala, or Java)
* ETL Tools & Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink, or similar technologies
* Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift)
* Databases & Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, Elasticsearch)
* Version Control & CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment
* Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment
* Nice to Have: Experience with data visualization tools (e.g. Superset, Tableau), Terraform and other IaC tooling, ML/AI data pipelines, and DevOps practices

EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

#LI-Remote #LI-NJ1

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Python, DevOps, JavaScript, Cloud, API, Marketing, Sales, Engineer, and Backend:
$60,000 — $90,000/year
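The responsibilities above center on event-driven pipelines fed by message queues such as Pub/Sub or Kafka. As a rough illustration of the consume-transform-load pattern (not HighLevel's actual stack; the queue, field names, and sink here are all hypothetical stand-ins), a minimal in-memory sketch in Python:

```python
import json
import queue
import threading

def run_pipeline(events, sink):
    """Consume raw events from a queue, transform them, and load them into a sink.

    A toy stand-in for a Pub/Sub or Kafka consumer loop: a real system would
    replace the in-memory queue with a subscriber client and `sink` with a
    warehouse writer (e.g. Snowflake).
    """
    q = queue.Queue()
    for e in events:
        q.put(json.dumps(e))  # simulate serialized messages on the wire
    q.put(None)  # sentinel: no more messages

    def worker():
        while True:
            msg = q.get()
            if msg is None:
                break
            event = json.loads(msg)
            # Transform step: drop events without a type, normalize field names
            if event.get("type"):
                sink.append({"event_type": event["type"].lower(),
                             "user": event.get("userId", "unknown")})

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return sink

rows = run_pipeline(
    [{"type": "CLICK", "userId": "u1"}, {"type": "VIEW"}, {"noise": True}],
    sink=[],
)
# rows == [{'event_type': 'click', 'user': 'u1'}, {'event_type': 'view', 'user': 'unknown'}]
```

The same shape scales up: the consumer loop becomes a streaming callback, the transform becomes a dbt or Spark job, and the sink becomes a warehouse load.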
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Delhi
This job post is closed and the position is probably filled. Please do not apply. Work for Jeeng and want to re-open this job? Use the edit link in the email when you posted the job!
If you join us, what will you do?

As a Senior Software Engineer at Jeeng, you will have a great impact on the product!

You will work in the AWS cloud on Scala auction applications, Java data processing projects, and Python serverless data processing and machine learning projects. These services work together to generate on the order of millions of ad bids per second, reliably and durably store the user actions taken on these ads, and then close the loop by training new machine learning models that optimize for user engagement.

Senior Software Engineer Responsibilities:

* Build and maintain the software and infrastructure powering a real-time advertising network.
* Design and implement strategies to increase the availability and reliability of the advertising network.
* Maintain and extend the monitoring and instrumentation needed to manage the health of the advertising network.
* Integrate with and improve our DevOps systems running continuous integration and rapid deployments.
* Respond to service outages in a timely manner.
* Work closely with data engineering team members to guarantee the persistence and correctness of auction events.

In order to be great at your job, you'll need to have:

* 5+ years of experience in similar roles or equivalent
* Experience with Scala, Java, and Python
* AWS experience
* Knowledge of the web, HTTP, and internet computing

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Senior, Engineer, Developer, Digital Nomad, DevOps, Java, Serverless, Cloud, Python, and Scala:
$70,000 — $125,000/year
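The post above describes Scala auction applications generating millions of ad bids per second. For readers unfamiliar with ad auctions, here is a minimal Python sketch of the second-price rule that many exchanges use; it is purely illustrative and says nothing about Jeeng's actual implementation:

```python
def run_auction(bids):
    """Pick a winner via a second-price (Vickrey) auction.

    bids: dict mapping bidder id -> bid amount.
    Returns (winner, price paid): the highest bidder wins but pays
    only the runner-up's bid, which encourages truthful bidding.
    A real exchange layers pacing, budgets, and ML-predicted
    engagement on top of this core rule.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bids for a second-price auction")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    _, second_price = ranked[1]  # the price charged to the winner
    return winner, second_price

winner, price = run_auction({"adv_a": 2.50, "adv_b": 1.75, "adv_c": 3.10})
# winner == "adv_c", price == 2.50
```

At production scale the interesting engineering is not this rule but running it reliably within an ad-serving latency budget, which is what the responsibilities above are about.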
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
Shopify's platform is growing at an incredible rate, generating vast amounts of data. We leverage the cloud in order to move fast and produce great results. While we operate a comprehensive data stack, we've still got a lot of work to do, and that's where you can lean in. We face many challenges head-on to ensure that our data moves seamlessly throughout our infrastructure in a safe and secure manner, while providing new insights and features.

We're looking for engineers with a background in infrastructure, security, and cloud technologies, DevOps, and an SRE mindset to collaborate on these challenges and deploy platform services at a very large scale. You'll need curiosity about how our systems work under the hood, and how we can leverage them to grow and protect the hundreds of thousands of entrepreneurs that use Shopify.

**You'll be working on:**
* Ensuring that our data platform stays online, secure, and performant
* Creating and deploying infrastructure around specific security requirements
* Developing configuration management and automation tools
* Building out our monitoring and analytics tooling to get insights about our platform usage
* Building a world-class data analytics platform to help both internal and external customers, focusing on making the lives of our hundreds of thousands of merchants better

**You'll need to have:**
* A systems-level approach; you've worked across the entire stack, from the OS all the way up to the application layer
* Cloud platform experience (GCP/AWS/Azure)
* Technical leadership experience mentoring other engineers
* Comfort with multiple languages; you're a low-level generalist who is comfortable with languages such as Go and Python, as well as languages that target the JVM like Java, Scala, or Kotlin
* A passion for troubleshooting and finding the solution for the long term; you don't accept the easy solution as the only solution, and will dig to ensure that we put the long-term benefit of our merchants and stakeholders first
* Well-founded opinions about writing code and approaching problems; you're comfortable with automated testing, code refactoring, and software engineering best practices
* Excitement for working with a remote team; you value collaborating on problems, asking questions, delivering feedback, and supporting others in their goals, whether they are in your vicinity or entire cities apart

**It'd be nice if you have experience:**
* Working with data at petabyte scale
* Securing a data platform and integrating security best practices at all phases of the development lifecycle
* Implementing privacy compliance in a data stack - for example, CCPA and GDPR
* Working with a modern data stack, including Spark, Beam, Presto, Hive, Airflow, and other big data tools and frameworks
* Developing and orchestrating large Docker deployments with Kubernetes

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous people, racialized people, people with disabilities, people from gender and sexually diverse communities, and/or people with intersectional identities.

Shopify is now permanently remote and working towards a future that is digital by default.

#Location
Worldwide
Mission

Join our team as a Cloud Developer Experience Engineer and improve the lives of thousands of developers using our product. Have you ever attempted to use a piece of software, only to become frustrated because of missing documentation, bad examples, or unintuitive UX/DX? This is your chance to help Platform.sh make sure that no developer ever has that experience with our software.

You'll be on a small autonomous team that focuses entirely on making developers passionate about our product. You'll be exposed to a wide variety of software running on multiple stacks, and will get to tinker with all the new shiny things that come along. You'll then translate your learnings into templates, learning and training materials, workshops, and conference talks to make sure that the largest number of developers benefit from your wisdom. You'll help us make Platform.sh better by gathering feedback and best practices from our users, for whom you will be a fierce advocate.

You'll work remotely.

Responsibilities

* Critically evaluate the developer experience of Platform.sh tools and products, with a focus on Java developers and workflows
* Maintain project starting templates in various stacks and languages
* Publish how-to articles and tutorials
* Prepare and give product demos, online and at conferences (estimated travel for this position is less than 20% of working time)
* Run training workshops, online and at conferences
* General problem solving and technical tinkering

Qualifications

* 3+ years of experience working as a Software Engineer
* Must have:
  * Expertise in writing and deploying Java web applications - specifically, some experience with J2EE on Tomcat and WildFly, plus Spring Boot knowledge
* Great to have:
  * Familiarity with the main web scripting languages: Ruby, Golang, Node.js, Python, or PHP (yeah, we just called Golang a scripting language, the world won't end)
* Nice to have:
  * Experience with other JVM languages, especially Scala and Clojure
  * Erlang / Elixir chops
  * A secret crush on Rust (we don't do any Rust, but some of us would really love to do everything in Rust)
* A good level of networking and systems knowledge. Specifically, you have at least a good basic understanding of containers and an excellent understanding of HTTP (please don't submit your CV if you can't tell a 301 from a 404, and we do expect you to at least know about TLS).
* Excellent knowledge of Git: you rebase like a god and you do not lose consciousness when you hear "bisect".
* A good grasp of relational databases (Postgres / MySQL), caches (Redis), search engines (Elasticsearch, but if you "only" have some Solr chops, we won't complain), message queues (any really, but we provide RabbitMQ and Kafka), and how they fit into an architecture.
* A good understanding of deployment workflows and some of the DevOps tooling (stuff like Puppet and Chef, anything from Hashicorp).
* Published technical articles or presentations.
* Experience giving presentations or trainings, e.g. at conferences.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Cloud, Engineer, Developer, Digital Nomad, DevOps, Java, PHP, Python, Scala, and Golang:
$70,000 — $120,000/year
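The qualifications above expect candidates to tell a 301 from a 404 on sight. As a quick refresher (illustrative only, using Python's standard library rather than anything Platform.sh-specific), the 3xx range means redirection while 4xx means a client error:

```python
from http import HTTPStatus

# 3xx = redirection, 4xx = client error: the distinction the post
# expects candidates to know cold.
moved = HTTPStatus(301)    # permanent redirect: the resource lives elsewhere
missing = HTTPStatus(404)  # client error: the resource was not found

is_redirect = 300 <= moved.value < 400
is_client_error = 400 <= missing.value < 500

print(moved.phrase, "/", missing.phrase)  # prints: Moved Permanently / Not Found
```

`http.HTTPStatus` also carries each code's standard reason phrase, which is handy when logging or asserting on responses in tests.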
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
[GrapheneDB](https://www.graphenedb.com) is the first cloud hosting provider to offer Neo4j graph databases as a service. We proudly manage thousands of Neo4j database instances, catering to everyone from individual developers working on pet projects to large companies with challenging workloads and reliability requirements.

Our small team is fully distributed. We are spread out all over Spain: Gran Canaria, Granada, Madrid, and Málaga. We also have an office in Las Palmas de Gran Canaria, where we meet for an all-hands on-site on a quarterly basis.

We value pragmatism, accountability, early feedback, and helping others. We write and support our own production code. This means that DevOps is a real thing for us. In short: we are team players.

As for our architecture and technology, Fran and Nando, two of our current team members, gave a presentation on our backend and infrastructure stack last year at the Madrid Scala Meetup. [Have a look!](https://www.meetup.com/es/Scala-Programming-Madrid/events/235570280/)

### What You'll Be Doing
- Work closely with front-end developers, designers, and product managers, using code and infrastructure to deliver value to new and existing customers.
- Monitor, investigate, and solve errors in our infrastructure, our distributed systems, and customer deployments, and collaborate on technical support cases.
- Help maintain and improve our monitoring and automated issue resolution system.
- Be part of the on-call team (after being introduced to our specific technologies and processes) and respond to incidents.
- Create incident post-mortems, and monitor and communicate stats on system reliability.
- Manage your own time and focus on the continuous delivery of projects.
- We don't expect you to know everything. When working remotely this can be a challenge, so we encourage our team to be constantly learning, failing, and asking.

### Requirements
- Strong communication skills and the ability to work cross-functionally (in English)
- Solid knowledge of Linux/UN*X, system monitoring, Infrastructure-as-a-Service, and configuration management
- Experience operating databases (monitoring, backups, debugging)
- Scripting skills in Python and/or Ruby
- Working knowledge of Scala, or a willingness to learn it

#### Bonus Points
- Experience with Neo4j or other NoSQL technologies such as MongoDB, Elasticsearch, and Redis
- Experience with cloud computing (AWS, Azure, GCP) and containerization (Docker, LXC)
- Familiarity with distributed systems programming
- Experience with any of these technologies: Scala, Akka, Rust, Netty, Prometheus, Terraform, Ansible, Kafka
- JVM monitoring and (practical) tuning
- Experience with the Git/GitHub PR & code review workflow

### What we offer
- A whole stack of technologies to learn, support, and develop. From end to end.
- A budget for personal development: books, training, conferences, ...
- Competitive salary
- Remote work

### Timeframe and commitment
- Start ASAP
- This is a full-time position
- We are only seeking candidates in European timezones (UTC-2 to UTC+4).

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, NoSQL, Scala, Admin, Engineer, Sys Admin, Cloud, Python, and Backend:
$70,000 — $120,000/year