About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.

Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/

Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.

Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.

About the Role:
We are seeking a talented and motivated data engineer to design, develop, and maintain our data infrastructure, along with backend systems and solutions that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. This role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance. The candidate will work closely with data scientists, analysts and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organisation.

Responsibilities:
* Software Engineering Excellence: Write clean, efficient, and maintainable code using JavaScript or Python while adhering to best practices and design patterns
* Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real-time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures (see the sketch after this list)
* Data Pipeline Development: Design, develop and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources
* Data Storage & Warehousing: Build and optimize databases, data lakes and data warehouses (e.g. Snowflake) for high-performance querying
* Data Integration: Work with APIs, batch and streaming data sources to ingest and transform data
* Performance Optimization: Optimize queries, indexing and partitioning for efficient data retrieval
* Collaboration: Work with data analysts, data scientists, software developers and product teams to understand requirements and deliver scalable solutions
* Monitoring & Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably
* Ownership & Problem-Solving: Proactively identify issues or bottlenecks and propose innovative solutions to address them
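Illustrative only: a minimal sketch of the event-driven ingestion pattern named in the bullets above, assuming the google-cloud-pubsub client library; the project, topic, and subscription IDs are hypothetical.

```python
# Minimal event-driven ingestion sketch using Google Cloud Pub/Sub.
# Assumes `pip install google-cloud-pubsub`; project/topic/subscription names are placeholders.
import json
from concurrent import futures

from google.cloud import pubsub_v1

PROJECT = "example-project"

# Producer side: publish a contact event to a topic.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "contact-events")
event = {"contact_id": "123", "action": "created"}
publisher.publish(topic_path, data=json.dumps(event).encode("utf-8")).result()

# Consumer side: pull events and hand them to a downstream pipeline stage.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT, "contact-events-etl")

def handle(message):
    payload = json.loads(message.data)
    # ... transform and load into the warehouse here ...
    message.ack()

with subscriber:
    streaming_future = subscriber.subscribe(subscription_path, callback=handle)
    try:
        streaming_future.result(timeout=30)  # listen for 30 s in this demo
    except futures.TimeoutError:
        streaming_future.cancel()
```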
Requirements:
* 3+ years of experience in software development
* Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
* Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows
* Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns
* Software Engineering Experience: Demonstrated experience (SDE II/III level) in designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala or Java)
* ETL Tools & Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink or similar technologies
* Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift)
* Databases & Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, ES)
* Version Control & CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment
* Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment
* Experience with data visualization tools (e.g. Superset, Tableau), Terraform, IaC, ML/AI data pipelines and DevOps practices is a plus

EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

#LI-Remote #LI-NJ1

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Design, Python, DevOps, JavaScript, Cloud, API, Marketing, Sales, Engineer and Backend jobs:

$60,000 — $90,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
Delhi
About Us
Hawk is the leading provider of AI-supported anti-money laundering and fraud detection technology. Banks and payment providers globally are using Hawk's powerful combination of traditional rules and explainable AI to improve the effectiveness of their AML compliance and fraud prevention by identifying more crime while maximizing efficiency by reducing false positives. With our solution, we are playing a vital role in the global fight against money laundering, fraud, and the financing of terrorism. We offer a culture of mutual trust, support and passion, while providing individuals with opportunities to grow professionally and make a difference in the world.

Your Mission
As a DevOps Engineer, you will play a crucial role in ensuring the scalability, security, and reliability of our AI-driven financial crime prevention platform. You will automate cloud infrastructure, implement monitoring and observability solutions, and build secure, scalable CI/CD pipelines. Your work will directly contribute to maintaining high availability for a platform that fights financial crime 24/7. This role is based on the East Coast, U.S. and requires expertise in cloud infrastructure, automation, security best practices, and continuous integration/deployment (CI/CD).

Your Responsibilities
* Provision, manage, and scale multi-cloud environments using Infrastructure as Code (IaC) (e.g., Terraform).
* Maintain high availability (HA), fault tolerance, and least-privilege security practices, while optimizing cloud costs.
* Design and maintain developer-friendly CI/CD workflows, container templates, and reusable artifacts for seamless software delivery.
* Implement real-time monitoring, alerting, and observability solutions (e.g., Elastic Stack, Prometheus, Grafana, CloudWatch) to proactively detect and resolve issues (see the sketch after this list).
* Implement and enforce cloud security best practices, identify and mitigate vulnerabilities, and ensure compliance with data protection regulations.
* Provide technical guidance to clients running Hawk's platform in their own VPC environments, supporting onboarding and integration.
* Develop structured documentation for cloud architectures, best practices, and deployment processes, ensuring seamless team collaboration.
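Illustrative only: a minimal sketch of the Prometheus-style observability work described in the responsibilities above, using the prometheus_client Python library; the metric names and port are hypothetical.

```python
# Minimal service-metrics sketch for Prometheus scraping.
# Assumes `pip install prometheus-client`; metric names and port are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

SCREENINGS = Counter("screenings_total", "Transactions screened", ["result"])
LATENCY = Histogram("screening_latency_seconds", "Time spent screening a transaction")

def screen_transaction() -> None:
    with LATENCY.time():                        # records the duration into the histogram
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real screening work
    SCREENINGS.labels(result="clean").inc()

if __name__ == "__main__":
    start_http_server(9100)                     # exposes /metrics for a Prometheus scrape job
    while True:
        screen_transaction()
```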
Your Profile
* 5+ years of experience in DevOps, Site Reliability Engineering (SRE), or Cloud Engineering roles.
* Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
* Strong expertise in Kubernetes, containerized applications, and cloud-native technologies.
* Hands-on experience with AWS or GCP and their core services.
* Proficiency with Terraform and Infrastructure as Code (IaC) methodologies.
* Experience with CI/CD tools such as GitLab CI, GitHub Actions, or similar.
* Strong knowledge of observability and monitoring tools (e.g., Elastic Stack, Prometheus, Grafana, CloudWatch).
* Solid understanding of cloud security principles, least-privilege access, and automated security policies.
* Ability to diagnose complex technical challenges and provide scalable, secure solutions.
* Strong communication and collaboration skills; able to work effectively in a remote, cross-functional environment.
* Comfortable in a fast-paced, hands-on role, with a willingness to get your hands dirty and embrace feedback for continuous improvement.

Preferred Qualifications
* Experience in cybersecurity, penetration testing, and cloud compliance.
* Familiarity with Java Spring Boot & Apache Kafka.
* Experience in 24/7 uptime environments with on-call rotations.
* Knowledge of big data systems (PostgreSQL, S3/Azure Blob Storage, Elasticsearch).

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar DevOps, Java, Cloud and Engineer jobs:

$55,000 — $90,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
Government Employees Insurance Company is hiring a
Remote Staff Engineer
Our Senior Staff Engineer works with our Staff and Sr. Engineers to innovate and build new systems, improve and enhance existing systems, and identify new opportunities to apply your knowledge to solve critical problems. You will lead the strategy and execution of a technical roadmap that will increase the velocity of delivering products and unlock new engineering capabilities. The ideal candidate is a self-starter with deep technical expertise in their domain.

Position Responsibilities
As a Senior Staff Engineer, you will:
* Provide technical leadership to multiple areas and provide technical and thought leadership to the enterprise
* Collaborate across team members and across the tech organization to solve our toughest problems
* Develop and execute technical software development strategy for a variety of domains
* Be accountable for the quality, usability, and performance of the solutions
* Utilize programming languages like C#, Java, Python or other object-oriented languages, SQL and NoSQL databases, container orchestration services including Docker and Kubernetes, and a variety of Azure tools and services
* Be a role model and mentor, helping to coach and strengthen the technical expertise and know-how of our engineering and product community; influence and educate executives
* Consistently share best practices and improve processes within and across teams
* Analyze costs and forecasts, incorporating them into business plans
* Determine and support resource requirements, evaluate operational processes, measure outcomes to ensure desired results, and demonstrate adaptability and sponsoring continuous learning

Qualifications
* Exemplary ability to design, perform experiments, and influence engineering direction and product roadmap
* Experience partnering with engineering teams and transferring research to production
* Extensive experience in leading and building full-stack application and service development, with a strong focus on SaaS products/platforms
* Proven expertise in designing and developing microservices using C#, gRPC, Python, Django, Kafka, and Apache Spark, with a deep understanding of both API and event-driven architectures
* Proven experience designing and delivering highly resilient event-driven and messaging-based solutions at scale with minimal latency
* Deep hands-on experience in building complex SaaS systems in large-scale, business-focused systems, with great knowledge of Docker and Kubernetes
* Fluency and specialization with at least two modern OOP languages such as C#, Java, C++, or Python, including object-oriented design
* Great understanding of open-source databases like MySQL, PostgreSQL, etc., and a strong foundation with NoSQL databases like Cosmos, Cassandra, Apache Trino, etc.
* In-depth knowledge of CS data structures and algorithms
* Ability to excel in a fast-paced, startup-like environment
* Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
* Experience with microservices-oriented architecture and extensible REST APIs
* Experience building the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems
* Experience in implementing security protocols across services and products: understanding of Active Directory, Windows Authentication, SAML, OAuth
* Fluency in DevOps concepts, cloud architecture, and the Azure DevOps operational framework
* Experience in leveraging PowerShell scripting
* Experience in existing operational portals such as Azure Portal
* Experience with application monitoring tools and performance assessments
* Experience in Azure networking (subscriptions, security zoning, etc.)

Experience
* 10+ years of full-stack development experience (C#/Java/Python/Go), with expertise in client-side and server-side frameworks
* 8+ years of experience with architecture and design
* 6+ years of experience in open-source frameworks
* 4+ years of experience with AWS, GCP, Azure, or another cloud service

Education
* Bachelor's degree in Computer Science, Information Systems, or equivalent education or work experience
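Illustrative only: a minimal sketch of the event-driven, Kafka-based messaging pattern referenced in the qualifications above, using the kafka-python client; the broker address, topic, and payload are hypothetical.

```python
# Minimal Kafka produce/consume sketch for an event-driven service.
# Assumes `pip install kafka-python`; broker, topic, and payload are placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"
TOPIC = "policy-events"

# Producer: emit a domain event as JSON.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"policy_id": "P-123", "event": "quote_created"})
producer.flush()

# Consumer: read events and hand them to downstream processing.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="pricing-service",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    print(record.value)  # replace with real event handling
    break
```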
Annual Salary
$115,000.00 - $260,000.00

The above annual salary range is a general guideline. Multiple factors are taken into consideration to arrive at the final hourly rate/annual salary to be offered to the selected candidate. Factors include, but are not limited to, the scope and responsibilities of the role, the selected candidate's work experience, education and training, the work location as well as market and business considerations. At this time, GEICO will not sponsor a new applicant for employment authorization for this position.

Benefits: As an Associate, you'll enjoy our Total Rewards Program* to help secure your financial future and preserve your health and well-being, including: Premier Medical, Dental and Vision Insurance with no waiting period**, Paid Vacation, Sick and Parental Leave, 401(k) Plan, Tuition Reimbursement, and Paid Training and Licensures. *Benefits may be different by location. Benefit eligibility requirements vary and may include length of service. **Coverage begins on the date of hire. Must enroll in New Hire Benefits within 30 days of the date of hire for coverage to take effect.

The equal employment opportunity policy of the GEICO Companies provides for a fair and equal employment opportunity for all associates and job applicants regardless of race, color, religious creed, national origin, ancestry, age, gender, pregnancy, sexual orientation, gender identity, marital status, familial status, disability or genetic information, in compliance with applicable federal, state and local law. GEICO hires and promotes individuals solely on the basis of their qualifications for the job to be filled. GEICO reasonably accommodates qualified individuals with disabilities to enable them to receive equal employment opportunity and/or perform the essential functions of the job, unless the accommodation would impose an undue hardship to the Company. This applies to all applicants and associates. GEICO also provides a work environment in which each associate is able to be productive and work to the best of their ability. We do not condone or tolerate an atmosphere of intimidation or harassment. We expect and require the cooperation of all associates in maintaining an atmosphere free from discrimination and harassment with mutual respect by and for all associates and applicants.

For more than 75 years, GEICO has stood out from the rest of the insurance industry! We are one of the nation's largest and fastest-growing auto insurers thanks to our low rates, outstanding service and clever marketing. We're an industry leader employing thousands of dedicated and hard-working associates. As a wholly owned subsidiary of Berkshire Hathaway, we offer associates training and career advancement in a financially stable and rewarding workplace.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Design, SaaS, Python, Docker, DevOps, Education, Cloud, API, Senior and Engineer jobs:

$47,500 — $97,500/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
MD Chevy Chase (Office) - JPS
Remote Cloudstack Engineer Public Cloud Scalability Team
Cloudstack Engineer - Public Cloud Scalability Team

Our office is based in Amsterdam, but remote work within the EU is available. We also offer relocation to the Netherlands.

Product Engineering at Leaseweb

Our team of approximately 90 engineers works in small scrum teams. Each team has end-to-end responsibility for a specific product or part of our architecture. We work on a remote-first basis, coming together in person at our Amsterdam headquarters twice a year.

Our organizational structure is flat, placing a high value on independence and entrepreneurship. The atmosphere is informal and relaxed, creating a highly motivating work environment in which you will work with some of the most inspiring colleagues in the industry.

What is the role about?

In this role we are looking for a highly experienced developer with a true DevOps mentality and skillset. You will be collaborating with and contributing to the Apache CloudStack project. Since we are a provider of hosting infrastructure, deep knowledge of Linux and networking is key. Supported by your team, we expect a self-organizing and independent professional who will take the lead in running and scaling our CloudStack deployments, from diving into software bugs and reproducing customer problems to proposing and building sustainable solutions, so that Leaseweb can provide scalable Public Cloud services.

You will be working with a team of DevOps engineers: a group with diverse expertise and highly curious minds who are excited about the challenges of building and operating Leaseweb's Public Cloud. Our objective is to build reliable platforms and interfaces providing trustworthy endpoints for users to integrate with, running a standardized stack that is easily maintained and fully autonomous for users through our API and Customer Portal.

Key responsibilities:
* Maintaining close collaboration with the Apache CloudStack community on the CloudStack project.
* Developing and supporting the Apache CloudStack project.
* Together with your team, maintaining Leaseweb's CloudStack deployments, both operationally and in software improvements.
* Working with the team to resolve issues that customers face with CloudStack, by solving bugs and introducing features (a brief automation sketch follows this list).
* Participation in the on-duty rotation schedule.
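Illustrative only: a minimal sketch of the kind of Python automation against the CloudStack API that day-to-day operational work can involve, assuming the community `cs` client library; the endpoint, credentials, and response handling are placeholders.

```python
# Minimal CloudStack API automation sketch using the community `cs` client.
# Assumes `pip install cs`; endpoint and credentials are placeholders.
from cs import CloudStack

api = CloudStack(
    endpoint="https://cloud.example.net/client/api",  # hypothetical management endpoint
    key="API_KEY",
    secret="API_SECRET",
)

# List running virtual machines and print their current state.
response = api.listVirtualMachines(state="Running")
for vm in response.get("virtualmachine", []):
    print(vm["name"], vm["state"], vm.get("hostname", "-"))
```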
Requirements:
* Understanding of the Apache CloudStack open-source project.
* Extensive experience with Java development in a cloud hosting context.
* Experience with Python for automation and testing purposes.
* Knowledge of the Linux operating system, preferably Ubuntu. Experience with Shell/Bash is an advantage.
* Excellent knowledge of virtualization technologies (KVM, QEMU and libvirt) is required.
* Knowledge of and experience with networking and storage use and automation will be a big advantage.
* Love of teamwork, good planning skills, logical thinking skills, problem-solving skills and an eye for detail.
* Experience with continuous integration tools such as Jenkins is a plus.
* Experience with configuration management systems like Chef is a plus.
* Experience with Git, Grafana, Prometheus, Kubernetes or Docker is a plus.

Benefits include
* Participation in the annual company bonus scheme and company pension
* Internet allowance and travel allowance
* Working from home policy
* Lease bike plan
* 25 days of paid time off (and the option to buy or sell up to 5 more days)
* Free lunch, parking, and fresh fruit provided when in the office
* Attractive relocation packages and an agency that takes care of the entire visa process
* Access to the Leaseweb Academy, a personalized learning platform offering a variety of studies, (Dutch) courses, and trainings
* Fun events year-round, from virtual pub quizzes to summer parties, company runs, quarterly hackathons and much more
* Monthly after-work drinks
* A multicultural work environment (our colleagues are from over 60 countries!) in a company where you can truly make a difference

Ready for the next step?

If you'd like to apply, please do so online. To learn more about us, follow us on LinkedIn or Instagram to get an inside look at life at Leaseweb. For questions, please reach out to Danisha Ardilla, Talent Acquisition Specialist, at: [email protected]

We directly source all candidates; any unsolicited profiles received from recruitment agencies will be treated as direct applications.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Python, Docker, Travel, DevOps, Java, Cloud, API and Engineer jobs:

$55,000 — $100,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
Amsterdam, North Holland, Netherlands
GRAIL is a healthcare company whose mission is to detect cancer early, when it can be cured. GRAIL is focused on alleviating the global burden of cancer by developing pioneering technology to detect and identify multiple deadly cancer types early. The company is using the power of next-generation sequencing, population-scale clinical studies, and state-of-the-art computer science and data science to enhance the scientific understanding of cancer biology, and to develop its multi-cancer early detection blood test. GRAIL is headquartered in Menlo Park, CA with locations in Washington, D.C., North Carolina, and the United Kingdom. GRAIL, LLC is a wholly-owned subsidiary of Illumina, Inc. (NASDAQ:ILMN). For more information, please visit www.grail.com.

Are you a champion of automation interested in using your talents for optimizing processes that will make an impact on the fight against cancer? If so, join GRAIL on the Data Integration team in Research!

GRAIL is seeking a Staff Data Engineer to join our team to support the growing data needs of GRAIL's clinical and research activities. You will leverage your expertise in automation and data engineering to ensure our scientific teams have the data they need to succeed. This role is pivotal in advancing GRAIL's mission by enhancing our data infrastructure and contributing to our early cancer detection efforts.

RESPONSIBILITIES
* Be a part of a highly collaborative team that focuses on delivering value to cross-functional partners by designing, deploying, and automating secure, efficient, and scalable data infrastructure and tools, reducing manual efforts and streamlining operations.
* Help model GRAIL data and ensure that it follows FAIR principles (findable, accessible, interoperable and reusable).
* Drive the design, deployment, and automated delivery of data infrastructure, standardized data models, datasets, and tools.
* Integrate automated testing and release processes to improve the quality and velocity of software and data deliveries.
* Collaborate with cross-functional teams, from Research to Clinical Lab Operations to Software Engineering, to provide comprehensive data solutions from conception to delivery.
* Ensure all software and data meet high standards for quality, clinical compliance, and privacy.
* Mentor fellow engineers and scientists, promoting best practices in software and data engineering.

PREFERRED EXPERIENCE
* B.S. / M.S. in a quantitative field (e.g., Computer Science, Engineering, Mathematics, Physics, Computational Biology) with at least 8 years of related industry experience, or Ph.D. with at least 5 years of related industry experience.
* Extensive experience with relational databases, data modeling principles, data pipeline tools and workflow engines (e.g., SQL, dbt, Apache Airflow, AWS Glue, Spark); a brief sketch follows the experience lists below.
* Extensive experience with DevOps practices, including CI/CD pipelines, containerized deployment (e.g., Kubernetes), and infrastructure-as-code (e.g., Terraform).
* Experience with supporting data science / machine learning data pipelines, preferably in the context of analysis of biological data.
* Experience in developing data pipelines using scalable cloud-based data warehouses / data lakes on AWS, Azure, or GCP.
* Solid programming skills in object-oriented and/or functional programming paradigms.
* Ability to embrace uncertainty, navigate ambiguity, and collaborate with product teams and stakeholders to refine requirements and drive towards clear engineering objectives and designs.
* A commitment to constructive dialogue, both in giving and receiving critical feedback, to foster an environment of continuous improvement.

HIGHLY WELCOME EXPERIENCE
* Prior industry experience in the healthcare, biotech, or life sciences industry, especially in the context of next-generation sequencing.
* Experience working in a regulated environment (e.g., FDA, CLIA, GDPR).
* Proficiency in Python and R.
* Experience building microservices and web applications.
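Illustrative only: a minimal sketch of the Apache Airflow-style workflow orchestration named in the experience lists above; the DAG ID, task bodies, and schedule are hypothetical.

```python
# Minimal Airflow DAG sketch: a two-step extract/load pipeline.
# Assumes Apache Airflow 2.x; the DAG ID, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    # pull raw records from a source system (placeholder)
    print("extracting raw records")

def load() -> None:
    # write validated records to the warehouse (placeholder)
    print("loading curated records")

with DAG(
    dag_id="research_results_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```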
The estimated, full-time, annual base pay scale for this position is $180,000 - $202,000. Actual base pay will consider skills, experience, and location.

Based on the role, colleagues may be eligible to participate in an annual bonus plan tied to company and individual performance, or an incentive plan. We also offer a long-term incentive plan to align company and colleague success over time.

In addition, GRAIL offers a progressive benefit package, including flexible time-off, a 401k with a company match, and, alongside our medical, dental, and vision plans, carefully selected mindfulness offerings.

GRAIL is an Equal Employment Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status. We will reasonably accommodate all individuals with disabilities so that they can participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. GRAIL maintains a drug-free workplace.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar DevOps and Engineer jobs:

$60,000 — $105,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
San Diego, CA
Remote Senior Machine Learning Engineer Data Solutions
Apply now for a career that puts wellbeing first!

GET TO KNOW US

Wellhub (formerly Gympass*) is a corporate wellness platform that connects employees to the best partners for fitness, mindfulness, therapy, nutrition, and sleep, all included in one subscription designed to cost less than each individual partner. Founded in 2012 and headquartered in NYC, we have a growing global team in 11 countries. At Wellhub, you have the opportunity to build a career in a high-growth tech company that places wellbeing at the foundation of its culture, and contribute to making every company a wellness company.

*Big news: Gympass is now Wellhub!
We are thrilled to announce our rebranding as Wellhub, marking a significant milestone in our journey. This transformation reflects our evolution from a "pass for gyms" to a comprehensive employee wellbeing solution. With our refreshed identity, we are poised to embark on an exciting new chapter of growth and expansion. We are elevating our offerings, including a completely new app experience and an expanded network of wellbeing partners. Learn more about it here.

THE OPPORTUNITY

We are hiring a Senior Machine Learning Engineer for our Data Solutions team in Brazil!

The Data department at Wellhub contributes to our data democratization mission. We are responsible for empowering every team by providing a scalable and reliable Data and ML platform, providing an efficient journey for every data practitioner and helping elevate business outcomes from our data.

We tackle large-scale production challenges using software engineering principles, leveraging cutting-edge technologies such as Kubernetes, Trino, Spark, Kafka, Airflow, Flink, MLflow, and more. Our infrastructure is entirely cloud-based, offering a dynamic and innovative environment for data-driven solutions.

YOUR IMPACT
* Develop and maintain a scalable platform to streamline the development, deployment and management of machine/deep learning models (a brief sketch follows this list);
* Design and implement data architectures and pipelines to solve complex business challenges;
* Ensure engineering best practices to create scalable and reliable data solutions;
* Live the mission: inspire and empower others by genuinely caring for your own wellbeing and your colleagues'. Bring wellbeing to the forefront of work, and create a supportive environment where everyone feels comfortable taking care of themselves, taking time off, and finding work-life balance.
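Illustrative only: a minimal sketch of the MLflow-based experiment tracking and model registration that an ML platform like the one described above typically standardizes; the tracking URI, experiment, and model names are hypothetical.

```python
# Minimal MLflow tracking/registry sketch for an ML platform workflow.
# Assumes `pip install mlflow scikit-learn`; the URI, experiment, and model names are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # hypothetical tracking server
mlflow.set_experiment("partner-recommendation")

X, y = make_classification(n_samples=200, n_features=5, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(C=0.5).fit(X, y)
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model makes it available for controlled deployment from the registry.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="partner-recommendation")
```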
WHO YOU ARE
* You have created robust and scalable Data and/or Machine Learning architectures and frameworks;
* Proficient in at least one programming language (e.g. Java, Python, Scala);
* You can collaborate with different teams to understand needs and develop effective Data solutions;
* You are motivated to design, implement, and maintain software solutions within the Data realm;
* Demonstrated experience in building and maintaining MLOps tools and pipelines for model development, deployment, and automation;
* Understanding of Machine and Deep Learning principles and frameworks (e.g. TensorFlow, PyTorch, scikit-learn);
* Understanding of data engineering and architecture principles and technologies (e.g. Apache Spark, Hadoop, Apache Flink);
* You are able to articulate ideas clearly when speaking to groups in English.

The following will be considered a plus:
* Hands-on experience with cloud platforms, particularly AWS, including services like SageMaker, EC2, and S3;
* Familiarity with containerization technologies such as Docker, Kubernetes, or Crossplane;
* Ability to deploy and manage machine learning models within Kubernetes clusters;
* Experience with DevOps practices, including CI/CD pipelines, infrastructure as code (IaC), and logging tools such as Prometheus, Grafana, or AWS CloudWatch.

We recognize that individuals approach job applications differently. We strongly encourage all aspiring applicants to go for it, even if they don't match the job description 100%. We welcome your application and will be delighted to explore if you could be a great fit for our team. For this specific role, please note that experience working with Java or Python, an understanding of Machine and Deep Learning principles and frameworks, and an advanced level of English are mandatory requirements.

WHAT WE OFFER YOU

We're a wellness company that is committed to the health and well-being of our employees. Our flexible program allows you to customize your benefits, according to your needs!

Our benefits include:

WELLNESS: Health, dental, and life insurance.

FLEXIBLE WORK: Choose when and where you work. For most, this will be a hybrid office/remote structure, but it can vary depending on the needs of the role and employee preferences. We offer all employees a home office stipend and a monthly flexible work allowance to help cover the costs of working from home.

FLEXIBLE SCHEDULE: Wellhubbers and their leaders can make the best decisions for their scope. This includes flexibility to adjust their working hours based on their personal schedule, time zone, and business needs.

WELLHUB: We believe in our mission and encourage our employees and their families to take care of their wellbeing too. Access onsite gyms and fitness studios, digital fitness programs, and online wellness resources for meditation, nutrition, mental health support, and more. You will receive the Gold plan at no cost, and other premium plans will be significantly discounted.

PAID TIME OFF: We know how important it is that our employees take time away from work to recharge. Vacations after 6 months and 3 days off per year + 1 day off for each year of tenure (up to 5 additional days) + an extra day off for your birthday.

PAID PARENTAL LEAVE: Welcoming a new child is one of the most special moments in your life and we want our employees to take the time to be present and enjoy their growing family. We offer 100% paid parental leave to all new parents and extended maternity leave.

CAREER GROWTH: Outstanding opportunities for personal and career growth. That means we maintain a growth mindset in everything we do and invest deeply in employee development.

CULTURE: An exciting and supportive atmosphere with ambitious people from around the world! You'll partner with global colleagues and share in the success of a high-growth technology company disrupting the health and wellness space. Our value-based culture of trust, flexibility, and integrity makes this possible every day.
Find more info on our careers page!

And to get a glimpse of life at Wellhub, follow us on Instagram @wellhublife and LinkedIn!

Diversity, Equity, and Belonging at Wellhub

We aim to create a collaborative, supportive, and inclusive space where everyone knows they belong.

Wellhub is committed to creating a diverse work environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, sex, gender identity or expression, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law.

Questions on how we treat your personal data? See our Job Applicant Privacy Notice.

#LI-REMOTE

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar DevOps, Cloud, Senior and Engineer jobs:

$60,000 — $110,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
São Paulo, São Paulo, Brazil
We are currently seeking a Senior Data Engineer with 5-7 years' experience. The ideal candidate would have the ability to work independently within an Agile working environment and have experience working with cloud infrastructure leveraging tools such as Apache Airflow, Databricks, dbt and Snowflake. Familiarity with real-time data processing and AI implementation is advantageous.

Responsibilities:
* Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch and real-time use cases (a brief sketch follows this list).
* Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks.
* Leverage Databricks for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.
* Utilize PySpark and/or Ray to build and scale distributed computing tasks, enhancing the performance of machine learning model training and inference processes.
* Monitor, troubleshoot, and resolve issues within data pipelines and infrastructure, implementing best practices for data engineering and continuous improvement.
* Diagrammatically document data engineering workflows.
* Collaborate with other Data Engineers, Product Owners, Software Developers and Machine Learning Engineers to implement new product features by understanding their needs and delivering timeously.
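Illustrative only: a minimal PySpark sketch of the kind of batch pipeline step described above (ingest, clean, aggregate, write); the paths and column names are hypothetical.

```python
# Minimal PySpark batch step: read raw events, clean them, write a daily aggregate.
# Assumes `pip install pyspark`; the S3 paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical source

clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
)

daily = clean.groupBy("order_date").agg(F.count("*").alias("order_count"))

daily.write.mode("overwrite").parquet("s3://example-bucket/curated/orders_daily/")  # hypothetical sink
spark.stop()
```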
Qualifications:
* Minimum of 3 years' experience deploying enterprise-level scalable data engineering solutions.
* Strong examples of independently developed data pipelines end-to-end, from problem formulation and raw data to implementation, optimization, and results.
* Proven track record of building and managing scalable cloud-based infrastructure on AWS (incl. S3, DynamoDB, EMR).
* Proven track record of implementing and managing the AI model lifecycle in a production environment.
* Experience using Apache Airflow (or equivalent), Snowflake, and Lucene-based search engines.
* Experience with Databricks (Delta format, Unity Catalog).
* Advanced SQL and Python knowledge with associated coding experience.
* Strong experience with DevOps practices for continuous integration and continuous delivery (CI/CD).
* Experience wrangling structured & unstructured file formats (Parquet, CSV, JSON).
* Understanding and implementation of best practices within ETL and ELT processes.
* Data quality best-practice implementation using Great Expectations.
* Real-time data processing experience using Apache Kafka (or equivalent) will be advantageous.
* Works independently with minimal supervision.
* Takes initiative and is action-focused.
* Mentors and shares knowledge with junior team members.
* Collaborative, with a strong ability to work in cross-functional teams.
* Excellent communication skills with the ability to communicate with stakeholders across varying interest groups.
* Fluency in spoken and written English.

#LI-RT9

Edelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics and data consultancy with a distinctly human mission.

We use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting and connections more meaningful.

DXI brings together and integrates the necessary people-based PR, communications, social, research and exogenous data, as well as the technology infrastructure to create, collect, store and manage first-party data and identity resolution. DXI is comprised of over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.

To learn more, visit: https://www.edelmandxi.com

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Python, DevOps, Cloud, Senior, Junior and Engineer jobs:

$60,000 — $110,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
Memora Health works with leading healthcare organizations to make complex care journeys simple for patients and clinicians so that care is more accessible, actionable, and always-on. Our team is rapidly growing as we expand our programs to reach more health systems and patients, and we are excited to bring on a Senior Data Engineer.

In this role, you will have the responsibility of driving the architecture, design and development of our data warehouse and analytics solutions, alongside APIs that allow other internal teams to interact with our data. The ideal candidate will be able to collaborate effectively with Memora's Product Management, Engineering, QA, TechOps and business stakeholders.

This role will work closely with cross-functional teams to understand customer pain points and identify, prioritize, and implement maintainable solutions. Ideal candidates will be driven not only by the problem we are solving but also by the innovative approach and technology that we are applying to healthcare, looking to make a significant impact on healthcare delivery. We're looking for someone with exceptional curiosity and enthusiasm for solving hard problems.

Primary Responsibilities:
* Collaborate with the Technical Lead, fellow engineers, Product Managers, QA, and TechOps to develop, test, secure, iterate, and scale complex data infrastructure, data models, data pipelines, APIs and application backend functionality (a brief API sketch follows this list).
* Work closely with cross-functional teams to understand customer pain points and identify, prioritize, and implement maintainable solutions.
* Promote product development best practices, supportability, and code quality, both through leading by example and through mentoring other software engineers.
* Manage and pare back technical debt, and escalate to the Technical Lead and Engineering Manager as needed.
* Establish best practices for designing, building and maintaining data models.
* Design and develop data models and transformation layers to support reporting, analytics and AI/ML capabilities.
* Develop and maintain solutions to enable self-serve reporting and analytics.
* Build robust, performant ETL/ELT data pipelines.
* Develop data quality monitoring solutions to increase data quality standards and metrics accuracy.
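Illustrative only: a minimal sketch of the kind of internal REST API layer over warehouse data mentioned above, using FastAPI (one of the Python frameworks listed in the qualifications); the route, model, and data are hypothetical.

```python
# Minimal internal REST API sketch exposing warehouse-derived data to other teams.
# Assumes `pip install fastapi uvicorn`; the route, model, and in-memory data are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="care-data-api")

class CareTask(BaseModel):
    patient_id: str
    task: str
    status: str

# Stand-in for a query against the warehouse / serving layer.
_FAKE_TASKS = [
    CareTask(patient_id="p-001", task="post-discharge check-in", status="pending"),
]

@app.get("/patients/{patient_id}/tasks", response_model=list[CareTask])
def list_tasks(patient_id: str) -> list[CareTask]:
    return [t for t in _FAKE_TASKS if t.patient_id == patient_id]

# Run locally with: uvicorn care_data_api:app --reload
```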
Qualifications (Required):
* 3+ years experience in shipping, maintaining, and supporting enterprise-grade software products
* 3+ years of data warehousing / analytics engineering
* 3+ years of data modeling experience
* Disciplined in writing readable, testable, and supportable code in JavaScript, TypeScript, Node.js (Express), Python (Flask, Django, or FastAPI), or Java
* Expertise in writing and consuming RESTful APIs
* Experience with relational or NoSQL databases (PostgreSQL, MySQL, MongoDB, Redis, etc.)
* Experience with data warehouses (BigQuery, Snowflake, etc.)
* Experience with analytical and reporting tools, such as Looker or Tableau
* Inclination toward test-driven development and test automation
* Experience with scrum methodology
* Excels in mentoring junior engineers
* B.S. in Computer Science or other quantitative fields, or related work experience

Qualifications (Bonus):
* Understanding of DevOps practices and technologies (Docker, Kubernetes, CI/CD, test coverage and automation, branch and release management)
* Experience with security tooling in the SDLC and Security by Design principles
* Experience with observability and APM tooling (Sumo Logic, Splunk, Sentry, New Relic, Datadog, etc.)
* Experience with an integration framework (Mirth Connect, Mule ESB, Apache NiFi, Boomi, etc.)
* Experience with healthcare data interoperability frameworks (FHIR, HL7, CCDA, etc.)
* Experience with healthcare data sources (EHRs, claims, etc.)
* Experience working at a startup

What You Get:
* An opportunity to work on a rapidly scaling care delivery platform, engaging thousands of patients and care team members and growing 2-3x annually
* Enter a highly collaborative environment and work on the fun challenges of scaling a high-growth startup
* Work alongside world-class clinical, operational, and technical teams to build and scale Memora
* Shape how leading health systems and plans think about modernizing the care delivery experience for their patients and care teams
* Improve the way care is delivered for hundreds of thousands of patients
* Gain deep expertise about healthcare transformation and direct customer exposure with the country's most innovative health systems and plans
* Ownership over your success and the ability to significantly impact the growth of our company
* Competitive salary and equity compensation with benefits including health, dental, and vision coverage, flexible work hours, paid maternity/paternity leave, bi-annual retreats, Macbook, and a 401(k) plan

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Design, Python, DevOps, NoSQL, Senior, Engineer and Backend jobs:

$60,000 — $110,000/year

#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
Prominent Edge is seeking highly talented and passionate Software Engineers to join our phenomenal software development team. We are a 100% remote company, so successful candidates must be highly self-motivated and capable of working independently. We do, however, share knowledge and talk across projects and topics on a daily basis. Since we are primarily a software engineering services company, you'll have exposure to a variety of technologies as opposed to having to work on one product or stack for too long.

We've been a 100% remote company since before it was cool to be remote. We hire the best talent and always strive to exceed expectations. We leverage best-of-breed open source technologies to provide our customers with innovative, user-centric solutions. We invest in our company culture and make sure that we have fun. We also have exceptional benefits such as free quality healthcare for your entire family. If this sounds like the type of environment in which you would thrive, and you qualify for the position below, please apply -- we'd love to hear from you! Visit our careers page (https://prominentedge.com/careers) to learn more.

**Required Skills**
* 5+ years experience as a Full-Stack Software Engineer, experienced working in an Agile development environment
* Experience leading project teams through the full development life cycle, including requirements analysis, architecture, design, coding, testing, and delivery of solutions
* Front-end development skills using modern JavaScript frameworks, such as ReactJS/React Native, Angular/AngularJS, or Vue
* Backend development skills using server-side frameworks, such as NodeJS/Express, Flask, Django, or Spring
* Database skills (e.g., Elasticsearch, Postgres/PostGIS, SQLite, MySQL, SQL Server, MongoDB, Redis, etc.)
* Excellent interpersonal and communication skills (both written and oral)
* Highly self-motivated and results-oriented team player
* Unwavering integrity and commitment to excellence
* BS degree in Computer Science or related field, or equivalent work experience

**Additional Skills ("Nice to Have")**
* Open source geospatial technologies, such as Mapbox GL, GeoServer, etc.
* Data visualization using technologies such as D3, Kibana, etc.
* Containerization and container orchestration, preferably using Docker and Kubernetes
* Cloud computing, especially using AWS services such as S3, RDS, SQS, EMR, or Kinesis
* Serverless approaches, preferably using AWS Lambda and Serverless Framework (a brief sketch follows this list)
* DevOps and Continuous Integration / Continuous Delivery (CI/CD), using technologies such as Jenkins or AWS CodeBuild
* 3D game engine or 3D web experience, using technologies such as CesiumJS, WebGL, Unity, or Unreal
* Advanced technologies (machine learning, computer vision, image processing, data mining, data analytics), using tools such as TensorFlow, PyTorch, or Apache Spark
* Scrum Master
* Active (or ability to obtain) Security Clearance
* Advanced degree (MS or MBA)
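Illustrative only: a minimal sketch of the serverless approach named in the nice-to-have list above, as a plain AWS Lambda handler in Python; the event shape and function purpose are hypothetical, and deployment details (Serverless Framework, SAM, etc.) are omitted.

```python
# Minimal AWS Lambda handler sketch for an API Gateway proxy integration.
# The event shape and response body are placeholders.
import json

def handler(event, context):
    # API Gateway proxy events carry query parameters under "queryStringParameters".
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```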
(#RMjE2LjczLjIxNi4xMDc=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Salary and compensation\n
$100,000 — $180,000/year

#Location
Worldwide

# How do you apply?

Fill out the application at the link and apply online or give me a call at (703) 801-0976!
Please mention that you found the job on Remote OK; this helps us get more companies to post here, thanks!
Radical candor? More like radical gander, right?

Ok, enough goose puns for the moment, but now that we've got your attention, allow us to tell you more about this company called GooseChase and why you might want to join us!

You love an app that makes you smile, right? Well, our product is a lot of fun. And even though our origins are that of a humble scavenger hunt app, we've now evolved (hatched?) into a super-flexible platform used by millions to create all sorts of experiences! It's all the best parts of a scavenger hunt, with a delightful twist to keep you coming back for more.

With all that fun and flexibility, we've been growing like crazy. For teachers in the classroom, we're the new "TV cart", aka the learning tool that students legitimately look forward to. For today's workplaces, virtual or otherwise, we help the team connect with each other to the point where they want to come into (or log on to) work. And for local cities and towns, we bring families together in a way that truly builds a sense of community.

But we're also unique in that we don't have any investors and, as a result, are able to put our people before profit. Seriously. We have a profit sharing program specifically for this reason! It also means we grow at the pace we want to and put the time into making this the best possible place to work. When we do something, it's because it's the best thing for our team and our customers, no matter what. We've actually been fully remote from the very beginning, because we wanted our people to have the flexibility to live wherever and however they wanted!

**So what exactly is this job?**

As a full-stack engineer, you'll have the opportunity to architect the tools and systems that serve as the connective tissue for our infrastructure, to work seamlessly within our front-end and back-end systems, and to help enable all of our teams to move faster and provide our customers with even more exciting experiences. This includes acting as a liaison between the product team and others such as sales, customer experience, and operations.

From a day-to-day perspective, you'll be a core member of our cross-functional product development team, focused on developing cross-system integrations, admin tools, automated marketing systems, testing platforms, and more. Within our product team, there are no silos or artificial barriers, meaning you'll be working closely with other engineers and designers. Whatever will help give our customers the best experience possible!

While by no means an exhaustive list, here are some of the things a successful candidate will have experience in:

* Modern front-end web technologies such as Webpack, Express.js, and React.js
* Using server-side rendering alongside single-page applications
* Server / scripting technologies such as Node.js and Python
* Core web technologies - HTTP, HTML, CSS, JavaScript
* Relational databases such as PostgreSQL and MySQL
* Distributed computing systems such as RabbitMQ, Apache Kafka, Redis
* Open source instrumentation systems such as Prometheus, Grafana, and OpenTelemetry
* Version control - e.g. Git, Mercurial, SVN
* DevOps technologies such as Docker, Kubernetes, and Helm
* Managing and deploying services on the public cloud - e.g. AWS, Google Cloud, Microsoft Azure
* CI/CD technologies such as CircleCI and Jenkins

Looking at how we work, our methodologies are:

* Continuous integration using CircleCI to empower developer autonomy and shorten development cycles
* Agile, with two-week sprint cycles
* Daily async check-ins - stay connected to the team, but without zombie standups
* Project management in Asana, with our "office" taking place on Slack & Zoom - we have a very strong gif game

**How do you know if this might be for you?**

At GooseChase you'll be working alongside a team of highly motivated, world-class engineers with tons of opportunities for learning, growth, and mentorship. Our product team works closely together, so we are extremely selective about who we hire to ensure the calibre of our engineering talent remains high. Be prepared to bring your "A" game!

We understand that relevant experience comes in all shapes and sizes and the ability to do the job is all that matters. With that in mind, we aren't going to put together a generic list of all the requirements that we're looking for with this job; however, there are certain things we are looking for - specifically:

* Have you been able to collaborate & communicate successfully with others in a cross-functional team?
* Do you have very high standards for your work and a desire to work with other talented people?
* Have you succeeded at building effective tools and software integrations in the past?
* Do you bring a depth of expertise in security and performance best practices?
* Are you able to take UX and design into account when developing front-end applications?
* Are you comfortable architecting APIs as well as developing corresponding documentation?
* Are you comfortable working within multiple projects and technologies simultaneously with minimal supervision?
* Are you based in the GMT-4 to GMT-8 time zones (North America), so you can work closely with our distributed product team?
* Can you get down with an uncomfortable amount of goose puns?

Ok, the last one isn't super work-related, but we honestly do have a lot of goose puns. It's one of our favourite parts of our culture!

We do things differently here. We're all about fun, but also making an impact. We care deeply about working with amazing people, and have set up our company culture specifically for that - our compensation is competitive, our work environment is autonomous and collaborative, and our emphasis is on learning and growth. Put simply, it's the type of company we actually want to work at ourselves!

So, this is us, standing in front of you, asking you to join us.

Please mention the words **LIVE BUBBLE ENTRY** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xMDc=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Salary and compensation
$60,000 — $90,000/year

#Benefits

Async

Profit sharing

#Location
North America Timezones
Please mention that you found the job on Remote OK; this helps us get more companies to post here, thanks!