Government Employees Insurance Company is hiring a
Remote Senior Staff Engineer
Our Senior Staff Engineer works with our Staff and Senior Engineers to innovate and build new systems, improve and enhance existing systems, and identify new opportunities to apply your knowledge to solve critical problems. You will lead the strategy and execution of a technical roadmap that will increase the velocity of delivering products and unlock new engineering capabilities. The ideal candidate is a self-starter who has deep technical expertise in their domain.

Position Responsibilities

As a Senior Staff Engineer, you will:

* Provide technical leadership to multiple areas, and provide technical and thought leadership to the enterprise
* Collaborate with team members and across the tech organization to solve our toughest problems
* Develop and execute technical software development strategy for a variety of domains
* Be accountable for the quality, usability, and performance of the solutions
* Utilize programming languages like C#, Java, Python, or other object-oriented languages; SQL and NoSQL databases; container orchestration services including Docker and Kubernetes; and a variety of Azure tools and services
* Be a role model and mentor, helping to coach and strengthen the technical expertise and know-how of our engineering and product community, and influence and educate executives
* Consistently share best practices and improve processes within and across teams
* Analyze costs and forecasts, incorporating them into business plans
* Determine and support resource requirements, evaluate operational processes, measure outcomes to ensure desired results, and demonstrate adaptability while sponsoring continuous learning

Qualifications

* Exemplary ability to design, perform experiments, and influence engineering direction and product roadmap
* Experience partnering with engineering teams and transferring research to production
* Extensive experience leading and building full-stack application and service development, with a strong focus on SaaS products/platforms
* Proven expertise in designing and developing microservices using C#, gRPC, Python, Django, Kafka, and Apache Spark, with a deep understanding of both API and event-driven architectures
* Proven experience designing and delivering highly resilient, event-driven, messaging-based solutions at scale with minimal latency
* Deep hands-on experience building complex SaaS systems for large-scale, business-focused use cases, with strong knowledge of Docker and Kubernetes
* Fluency and specialization in at least two modern OOP languages such as C#, Java, C++, or Python, including object-oriented design
* Strong understanding of open-source databases like MySQL and PostgreSQL, and a strong foundation in NoSQL databases like Cosmos DB, Cassandra, and Apache Trino
* In-depth knowledge of CS data structures and algorithms
* Ability to excel in a fast-paced, startup-like environment
* Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
* Experience with microservices-oriented architecture and extensible REST APIs
* Experience building the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems
* Experience implementing security protocols across services and products: understanding of Active Directory, Windows Authentication, SAML, OAuth
* Fluency in DevOps concepts, cloud architecture, and the Azure DevOps operational framework
* Experience leveraging PowerShell scripting
* Experience with operational portals such as the Azure Portal
* Experience with application monitoring tools and performance assessments
* Experience with Azure networking (subscriptions, security zoning, etc.)
* 10+ years of full-stack development experience (C#/Java/Python/Go), with expertise in client-side and server-side frameworks
* 8+ years of experience with architecture and design
* 6+ years of experience with open-source frameworks
* 4+ years of experience with AWS, GCP, Azure, or another cloud service

Education

Bachelor's degree in Computer Science, Information Systems, or equivalent education or work experience

Annual Salary

$115,000.00 - $260,000.00

The above annual salary range is a general guideline. Multiple factors are taken into consideration to arrive at the final hourly rate/annual salary to be offered to the selected candidate. Factors include, but are not limited to, the scope and responsibilities of the role, the selected candidate's work experience, education and training, and the work location, as well as market and business considerations. At this time, GEICO will not sponsor a new applicant for employment authorization for this position.

Benefits

As an Associate, you'll enjoy our Total Rewards Program* to help secure your financial future and preserve your health and well-being, including:

* Premier Medical, Dental and Vision Insurance with no waiting period**
* Paid Vacation, Sick and Parental Leave
* 401(k) Plan
* Tuition Reimbursement
* Paid Training and Licensures

*Benefits may be different by location. Benefit eligibility requirements vary and may include length of service.

**Coverage begins on the date of hire. Must enroll in New Hire Benefits within 30 days of the date of hire for coverage to take effect.

The equal employment opportunity policy of the GEICO Companies provides for a fair and equal employment opportunity for all associates and job applicants regardless of race, color, religious creed, national origin, ancestry, age, gender, pregnancy, sexual orientation, gender identity, marital status, familial status, disability or genetic information, in compliance with applicable federal, state and local law. GEICO hires and promotes individuals solely on the basis of their qualifications for the job to be filled.
GEICO reasonably accommodates qualified individuals with disabilities to enable them to receive equal employment opportunity and/or perform the essential functions of the job, unless the accommodation would impose an undue hardship to the Company. This applies to all applicants and associates. GEICO also provides a work environment in which each associate is able to be productive and work to the best of their ability. We do not condone or tolerate an atmosphere of intimidation or harassment. We expect and require the cooperation of all associates in maintaining an atmosphere free from discrimination and harassment with mutual respect by and for all associates and applicants.

For more than 75 years, GEICO has stood out from the rest of the insurance industry! We are one of the nation's largest and fastest-growing auto insurers thanks to our low rates, outstanding service and clever marketing. We're an industry leader employing thousands of dedicated and hard-working associates. As a wholly owned subsidiary of Berkshire Hathaway, we offer associates training and career advancement in a financially stable and rewarding workplace.

#Salary and compensation
No salary data was published by the company, so we estimated a salary based on similar Design, SaaS, Python, Docker, DevOps, Education, Cloud, API, Senior and Engineer jobs:
$47,500 — $97,500/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
MD Chevy Chase (Office) - JPS
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also, never pay for training you are required to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also, always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant; be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
Join SADA India as a Senior Data Engineer on the Enterprise Support service!

Your Mission

As a Sr. Data Engineer on the Enterprise Support service team at SADA, you will reduce customer anxiety about running production workloads in the cloud by implementing and iteratively improving observability and reliability. You will have the opportunity to engage with our customers in a meaningful way by defining, measuring, and improving key business metrics; eliminating toil through automation; inspecting code, design, implementation, and operational procedures; enabling experimentation by helping create a culture of ownership; and winning customer trust through education, skill sharing, and implementing recommendations. Your efforts will accelerate our customers' cloud adoption journey, and we will be with them through the transformation of their applications, infrastructure, and internal processes. You will be part of a new social contract between customers and service providers that demands shared responsibility and accountability: our partnership with our customers will ensure we are working towards a common goal and share a common fate.

This is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute their recommendations to our customers, and with Professional Services on large projects that require PMO support.

Pathway to Success

#MakeThemRave is at the foundation of all our engineering. Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems in the Google Cloud Platform.

Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. You will be measured bi-yearly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions.

As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations

Customer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.

Onboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals.

Job Requirements

Required Credentials:

* Google Professional Data Engineer Certified, or able to complete the certification within the first 45 days of employment
* A secondary Google Cloud certification in any other specialization

Required Qualifications:

* 4+ years of experience in cloud support
* Experience supporting customers, preferably in 24/7 environments
* Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
* Experience writing software in two or more languages such as Python, Java, Scala, or Go
* Experience building production-grade data solutions (relational and NoSQL)
* Experience with systems monitoring/alerting, capacity planning, and performance tuning
* Experience with BI tools like Tableau, Looker, etc. is an advantage
* A consultative mindset that delights the customer by building good rapport with them to fully understand their requirements and provide accurate solutions

Useful Qualifications:

* Mastery in at least one of the following domain areas:
  * Google Cloud Dataflow: building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ; autoscaling Dataflow clusters; troubleshooting cluster operation issues
  * Data integration tools: building data pipelines using modern data integration tools such as Fivetran, Striim, Data Fusion, etc. Must have hands-on experience configuring and integrating with multiple data sources within and outside of Google Cloud
  * Large enterprise migration: migrating entire cloud or on-prem assets to Google Cloud, including data lakes, data warehouses, databases, business intelligence, jobs, etc.; providing consultations for optimizing cost, defining methodology, and coming up with a plan to execute the migration
* Experience with IoT architectures and building real-time data streaming pipelines
* Experience operationalizing machine learning models on large datasets
* Demonstrated leadership and self-direction -- a willingness to teach others and learn new techniques
* Demonstrated skills in selecting the right statistical tools given a data analysis problem
* Understanding of Chaos Engineering
* Understanding of PCI, SOC2, and HIPAA compliance standards
* Understanding of the principle of least privilege and security best practices
* Understanding of cryptocurrency and blockchain technology

#Salary and compensation
No salary data was published by the company, so we estimated a salary based on similar Cloud, Senior and Engineer jobs:
$60,000 — $100,000/year
#Location
Thiruvananthapuram, Kerala, India
Remote Senior AI Infra Engineer, AI/ML and Data Infrastructure
The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central team provides the support needed to push this work forward.

The Central team at CZI consists of our Finance, People & DEI, Real Estate, Events, Workplace, Facilities, Security, Brand & Communications, Business Systems, Central Operations, Strategic Initiatives, and Ventures teams. These teams provide strategic support and operational excellence across the board at CZI.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways to help drive solutions. We are uniquely positioned to design, build, and scale software systems to help educators, scientists, and policy experts better address the myriad challenges they face. Our technology team is already helping schools bring personalized learning tools to teachers and schools across the country. We are also supporting scientists around the world as they develop a comprehensive reference atlas of all cells in the human body, and are developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to solve important problems in the biomedical sciences.

The AI/ML and Data Engineering Infrastructure organization works on building shared tools and platforms to be used across all of the Chan Zuckerberg Initiative, partnering with and supporting the work of a wide range of Research Scientists, Data Scientists, and AI Research Scientists, as well as a broad range of Engineers focusing on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale. A person in this role will build these technology solutions and help to cultivate a culture of shared best practices and knowledge around core engineering.

What You'll Do

* Participate in the technical design and building of efficient, stable, performant, scalable and secure AI/ML and data infrastructure engineering solutions
* Do active hands-on coding on our deep learning and machine learning models
* Design and implement complex systems integrating with our large-scale AI/ML GPU compute infrastructure and platform, making working across multiple clouds easier and more convenient for our Research Engineers, ML Engineers, and Data Scientists
* Use your solid experience and skills in building containerized applications and infrastructure using Kubernetes, in support of our large-scale GPU research cluster as well as our various heterogeneous and distributed AI/ML environments
* Collaborate with other team members on the design and build of our cloud-based AI/ML platform solutions, which include Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes
* Collaborate with our partners on data management solutions for our heterogeneous collection of complex datasets
* Help build tooling that makes optimal use of our shared infrastructure in empowering our AI/ML efforts, with a world-class GPU compute cluster and other compute environments such as our AWS-based services

What You'll Bring

* BS or MS degree in Computer Science or a related technical discipline, or equivalent experience
* 5+ years of relevant coding experience
* 3+ years of systems architecture and design experience, with a broad range of experience across data, AI/ML, core infrastructure, and security engineering
* Experience scaling containerized applications on Kubernetes or Mesos, including expertise with creating custom containers using secure AMIs and continuous deployment systems that integrate with Kubernetes or Mesos (Kubernetes preferred)
* Proficiency with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, and experience with on-prem and colocation service hosting environments
* Proven coding ability with a systems language such as Rust, C/C++, C#, Go, Java, or Scala
* Shown ability with a scripting language such as Python, PHP, or Ruby
* AI/ML platform operations experience in an environment with challenging data and systems platform problems, including large-scale Kafka and Spark deployments (or their counterparts such as Pulsar, Flink, and/or Ray), as well as workflow scheduling tools such as Apache Airflow, Dagster, or Apache Beam
* MLOps experience working with medium- to large-scale GPU clusters in Kubernetes (Kubeflow), HPC environments, or large-scale cloud-based ML deployments
* Working knowledge of Nvidia CUDA and AI/ML custom libraries
* Knowledge of Linux systems optimization and administration
* Understanding of data engineering, data governance, data infrastructure, and AI/ML execution platforms
* PyTorch, Keras, or TensorFlow experience is a strong nice-to-have
* HPC and Slurm experience is a strong nice-to-have

Compensation

The Redwood City, CA base pay range for this role is $190,000 - $285,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside Redwood City are adjusted based on cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You

We're thankful to have an incredible team behind our work. To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible.

* CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.
* Annual benefit for employees that can be used most meaningfully for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
* CZI Life of Service Gifts are awarded to employees to "live the mission" and support the causes closest to them.
* Paid time off to volunteer at an organization of your choice.
* Funding for select family-forming benefits.
* Relocation support for employees who need assistance moving to the Bay Area.
* And more!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn about our diversity, equity, and inclusion efforts.

If you're interested in a role but your previous experience doesn't perfectly align with each qualification in the job description, we still encourage you to apply, as you may be the perfect fit for this or another role.

Explore our work modes, benefits, and interview process at www.chanzuckerberg.com/careers.

#LI-Remote

#Salary and compensation
No salary data was published by the company, so we estimated a salary based on similar Design, Amazon, Recruiter, Cloud, Senior and Engineer jobs:
$42,500 — $82,500/year
#Location
Redwood City, California, United States
CAPCO POLAND

*We are looking for a Poland-based candidate. The job is remote but may require some business trips.

Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to #BeYourselfAtWork. We celebrate individuality and recognize that diversity and inclusion, in all forms, is critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table - so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.

Capco Poland is a global technology and management consultancy specializing in driving digital transformation across the financial services industry. We are passionate about helping our clients succeed in an ever-changing industry. We are also experts focused on development, automation, innovation, and long-term projects in financial services. In Capco, you can code, write, create, and live at your maximum capabilities without getting dull, tired, or foggy.

We're seeking a skilled Mid Big Data Engineer to join our team. The ideal candidate will be responsible for designing, implementing and maintaining scalable data pipelines and solutions on on-prem, migration, and cloud projects for large-scale data processing and analytics.

THINGS YOU WILL DO

* Design, develop and maintain robust data pipelines using Scala or Python, Spark, Hadoop, and SQL for batch and streaming data processing
* Collaborate with cross-functional teams to understand data requirements and design efficient solutions that meet business needs
* Optimize Spark jobs and data processing workflows for performance, scalability and reliability
* Ensure data quality, integrity and security throughout the data lifecycle
* Troubleshoot and resolve data pipeline issues in a timely manner to minimize downtime and impact on business operations
* Stay updated on industry best practices, emerging technologies, and trends in big data processing and analytics
* Document design specifications, deployment procedures and operational guidelines for data pipelines and systems
* Provide technical guidance and mentorship for new joiners

TECH STACK: Python or Scala, OOP, Spark, SQL, Hadoop

Nice to have: GCP, Pub/Sub, BigQuery, Kafka, Juniper, Apache NiFi, Hive, Impala, Cloudera, CI/CD

SKILLS & EXPERIENCES YOU NEED TO GET THE JOB DONE

* Min. 3-4 years of experience as a Data Engineer/Big Data Engineer
* University degree in computer science, mathematics, natural sciences, or a similar field, and relevant working experience
* Excellent SQL skills, including advanced concepts
* Very good programming skills in Python or Scala
* Experience in Spark and Hadoop
* Experience in OOP
* Experience using agile frameworks like Scrum
* Interest in financial services and markets
* Nice to have: experience or knowledge with GCP
* Fluent English communication and presentation skills
* Sense of humor and positive attitude

WHY JOIN CAPCO?

* Employment contract and/or business-to-business - whichever you prefer
* Possibility to work remotely
* Speaking English on a daily basis, mainly in contact with foreign stakeholders and peers
* Multiple employee benefits packages (MyBenefit Cafeteria, private medical care, life insurance)
* Access to a 3,000+ business courses platform (Udemy)
* Access to required IT equipment
* Paid Referral Program
* Participation in charity events, e.g. Szlachetna Paczka
* Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
* Being part of the core squad focused on the growth of the Polish business unit
* A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
* A work culture focused on innovation and creating lasting value for our clients and employees

ONLINE RECRUITMENT PROCESS STEPS*

* Screening call with the Recruiter
* Technical interview: first stage
* Client Interview
* Feedback/Offer

*The recruitment process may be modified

#Salary and compensation
No salary data was published by the company, so we estimated a salary based on similar Design, Python, Recruiter, Cloud, Senior and Engineer jobs:
$57,500 — $92,500/year
#Location
Kraków, Lesser Poland Voivodeship, Poland
About the Role

The Solutions Engineer plays a pivotal role in DriveWealth's growth and expansion by leading the API integration of new partners, educating them on new products, and identifying processes that can be automated to optimize team and partner efficiency.

What You'll Do

* Program Management: Identify and implement process improvements that maximize time efficiency for DriveWealth employees and partners. Use your strong programming knowledge to quickly implement new automation in ServiceNow, GitHub, and other ancillary programs
* Process Optimization: Define and document repeatable processes, ensuring operational efficiency and scalability while maintaining a keen eye for detail and quality assurance
* Cross-functional Collaboration: Collaborate closely with internal teams, including sales, finance, product, technology, and operations, to develop cohesive strategies that prioritize partner success and drive revenue growth
* Educate and Lead Partners: Align closely with partners who will be integrating DriveWealth's API. Use your robust programming knowledge to provide education, project management, and architecture support while maintaining positive partner relationships
* Accountability and Advocacy: Establish a culture of mutual accountability between the Relationship Management team and partners. Advocate for their needs and marshal resources to support them while aligning with DriveWealth's overarching vision
* Identify Product Bugs: Use your debugging skills to identify pain points and bugs reported by partners, while suggesting and introducing changes to our product team

What You'll Need

* Five years of experience in software development, with a focus on the implementation of APIs
* Proficiency in full-stack development, with working knowledge of cloud-based platform tools such as AWS and/or Google Cloud Platform
* Proficiency in implementing event-based programs, working with tools such as Apache Kafka and AWS SQS
* Exceptional written and verbal communication skills
* Strong time management abilities with the capacity to prioritize and manage multiple partner engagements concurrently
* Proven problem-solving skills and meticulous attention to detail
* Ability to collaborate effectively with cross-functional teams and external partners
* Customer-centric mindset with a commitment to ensuring client success and satisfaction
* Proactive, self-motivated, and capable of working independently while taking ownership of client relationships
* You thrive in a fast-paced environment and can deliver results under pressure
* You are driven by a passion for partner success and business growth

Applicants must be authorized to work for any employer in the U.S. DriveWealth is unable to sponsor or take over sponsorship of an employment visa at this time.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Cloud, API and Engineer jobs:

$60,000 — $110,000/year

#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
New York City, New York, United States
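The DriveWealth listing above asks for experience implementing event-based programs with tools such as Apache Kafka and AWS SQS. As a minimal, broker-free sketch of the underlying producer/consumer pattern (an in-process queue stands in for the broker; the event names are invented and none of this reflects DriveWealth's actual stack):

```python
import queue
import threading

def producer(q, events):
    """Publish events to the queue, then send a sentinel to signal completion."""
    for event in events:
        q.put(event)
    q.put(None)  # sentinel: no more events

def consumer(q, handled):
    """Consume events until the sentinel arrives."""
    while True:
        event = q.get()
        if event is None:
            break
        # With Kafka or SQS, this is where you would process the message
        # and then commit the offset / delete it from the queue.
        handled.append(event)

events = [{"type": "order.created", "id": i} for i in range(3)]
q = queue.Queue()
handled = []

t = threading.Thread(target=consumer, args=(q, handled))
t.start()
producer(q, events)
t.join()
print(len(handled))  # all 3 events were handled
```

In a real Kafka or SQS integration the queue would be replaced by the broker's client (a poll loop with explicit acknowledgement), but the decoupling of producer and consumer is the same.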
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
Verana Health, a digital health company that delivers quality drug lifecycle and medical practice insights from an exclusive real-world data network, recently secured a $150 million Series E led by Johnson & Johnson Innovation - JJDC, Inc. (JJDC) and Novo Growth, the growth-stage investment arm of Novo Holdings.

Existing Verana Health investors GV (formerly Google Ventures), Casdin Capital, and Brook Byers also joined the round, as did notable new investors, including the Merck Global Health Innovation Fund, THVC, and Breyer Capital.

We are driven to create quality real-world data in ophthalmology, neurology, and urology to accelerate quality insights across the drug lifecycle and within medical practices, and to advance the quality of care and quality of life for patients. DRIVE defines our internal purpose and is the galvanizing force that grounds us in a shared corporate culture. DRIVE stands for: Diversity, Responsibility, Integrity, Voice-of-Customer, and End-Results.

Our headquarters are in San Francisco, with additional offices in Knoxville, TN and New York City and employees working remotely in AZ, CA, CO, CT, FL, GA, IL, LA, MA, NC, NJ, NY, OH, OR, PA, TN, TX, UT, VA, WA, WI. All employees are required to have permanent residency in one of these states. Candidates who are willing to relocate are also encouraged to apply.

Job Title: Data Engineer

Job Intro:

As a Data/Software Engineer at Verana Health, you will be responsible for extending a set of tools used for data pipeline development. You will need strong hands-on experience designing and developing cloud services, and a deep understanding of data-quality metadata management, data ingestion, and curation. You will generate software solutions using Apache Spark, Hive, Presto, and other big data frameworks, analyze systems and requirements to provide the best technical solutions with regard to the flexibility, scalability, and reliability of the underlying architecture, and document and improve software testing and release processes across the entire data team.

Job Duties and Responsibilities:

* Architect, implement, and maintain scalable data architectures to meet data processing and analytics requirements using AWS and Databricks
* Troubleshoot complex data issues and optimize pipelines with data quality, computation, and cost in mind
* Collaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions
* Design solutions for ingesting and curating highly variable data structures in a highly concurrent cloud environment
* Retain metadata that tracks execution details, enabling reproducibility and operational metrics
* Create routines that add observability and alerting for pipeline health
* Establish data quality checks and ensure data integrity and accuracy throughout the data lifecycle
* Research, perform proofs-of-concept with, and leverage performant database technologies (such as Aurora Postgres, Elasticsearch, Redshift) to support end-user applications that need sub-second response times
* Participate in code reviews
* Stay proactively updated on industry trends and emerging technologies in data engineering
* Develop data services using RESTful APIs that are secure (OAuth/SAML), scalable (containerized with Docker), observable (using monitoring tools like Datadog or the ELK stack), documented with OpenAPI/Swagger, built with frameworks in Python/Java, and deployed through automated CI/CD using GitHub Actions
* Document data engineering processes, architectures, and configurations

Basic Requirements:

* A minimum of a BS degree in computer science, software engineering, or a related scientific discipline
* A minimum of 3 years of experience in software development
* Strong programming skills in languages such as Python/PySpark and SQL
* Experience with Delta Lake, Unity Catalog, Delta Sharing, and Delta Live Tables (DLT)
* Experience with data pipeline orchestration tools such as Airflow and Databricks Workflows
* 1 year of experience working in an AWS cloud computing environment, preferably with Lambda, S3, SNS, and SQS
* Understanding of data management principles (governance, security, cataloging, lifecycle management, privacy, quality)
* Good understanding of relational databases
* Demonstrated ability to build software tools in a collaborative, team-oriented environment, driven by product and customer needs
* Strong communication and interpersonal skills
* Uses source code version control
* Hands-on experience with Docker containers and container orchestration

Bonus:

* Healthcare and medical data experience
* Experience with modern compiled programming languages (C++, Go, Rust)
* Experience building HTTP/REST APIs using popular frameworks
* Experience building out extensive automated test suites

Benefits:

* Health, vision, and dental coverage for employees: Verana pays 100% of employee insurance coverage and 70% of family coverage, plus an additional monthly $100 individual / $200 HSA contribution with an HDHP
* Spring Health mental health support
* Flexible vacation plans
* A generous parental leave policy and family-building support through the Carrot app
* $500 learning and development budget
* $25/week in DoorDash credit
* Headspace meditation app, unlimited access
* Gympass: 3 free live classes per week plus monthly discounts for gyms like SoulCycle

Final note:

You do not need to match every listed expectation to apply for this position. Here at Verana, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Docker, Testing, Cloud and Engineer jobs:

$70,000 — $100,000/year
Who We Are:

SmithRx is a rapidly growing, venture-backed Health-Tech company. Our mission is to disrupt the expensive and inefficient Pharmacy Benefit Management (PBM) sector by building a next-generation drug acquisition platform driven by cutting-edge technology, innovative cost-saving tools, and best-in-class customer service. With hundreds of thousands of members onboarded since 2016, SmithRx has a solution that is resonating with clients all across the country.

We pride ourselves on our mission-driven and collaborative culture that inspires our employees to do their best work. We believe that the U.S. healthcare system is in need of transformation, and we come to work each day dedicated to making that change a reality. At our core, we are guided by our company values:

* Integrity: Do the right thing. Especially when it's hard.
* Courage: Embrace the challenge.
* Together: Build bridges and lift up your colleagues.

Job Summary:

SmithRx is innovating in Pharmacy Benefits Management (PBM) with a next-gen platform, transforming how businesses manage pharmacy benefits. Our advanced technology offers real-time insights for cost efficiency, improved clinical services, and an enhanced customer experience. As part of SmithRx's product & engineering organization, the data engineering team is committed to creating a scalable and reliable data ecosystem, a vital foundation for delivering excellent service and operational superiority to our customers.

We are currently seeking a highly motivated Senior Data Engineer to join our fast-paced data team. The ideal candidate will work closely with cross-functional teams to develop scalable data pipelines, optimize data workflows, and ensure data quality and reliability, and will bring a strong background in data engineering, with expertise in data modeling, ETL processes, and cloud technologies.

What you will do:

* Design and implement scalable data models in an enterprise data warehouse to support the company's analytical and reporting needs
* Develop and optimize ETL processes to ingest, transform, and load data from various sources into a data warehouse
* Collaborate with internal stakeholders, including the Data Analytics team, to understand data requirements and translate them into technical solutions
* Build and maintain data warehouses, data lakes, and other data storage solutions to store and manage large volumes of structured and unstructured data
* Implement and enforce data governance policies to ensure PII/PHI protection, security, and compliance
* Monitor and optimize ETL jobs, database performance, and data warehouse queries
* Document data engineering processes, data models, and designs for knowledge sharing and reference
* Mentor junior data engineers and provide technical guidance and support to team members

What you will bring to SmithRx:

* Bachelor's degree or above in Computer Science, Information Technology, or a related field
* 5+ years of related experience in data engineering or software engineering, including proven experience as a Data Engineer with expertise in data warehouse technologies
* Strong programming skills, particularly in languages such as Python and Java; proficiency in SQL and PySpark
* Solid understanding of data modeling concepts and database design principles
* Hands-on experience with ETL tools and frameworks (e.g., Apache Spark, Apache Airflow, dbt, Looker)
* Strong problem-solving abilities and attention to detail
* Excellent communication and collaboration skills
* Positivity; a non-dogmatic, team-first attitude
* Flexibility; someone who is responsive and comfortable with ambiguity
* Start-up or healthcare experience is highly desirable

What SmithRx Offers You:

* Total Rewards package that includes incentive bonus and stock options
* Highly competitive wellness benefits including Medical, Pharmacy, Dental, Vision, Life Insurance, and AD&D Insurance
* Flexible Spending Benefits
* 401(k) Retirement Savings Program
* Short-term and long-term disability
* Discretionary Paid Time Off
* 12 Paid Holidays
* Wellness Benefits
* Commuter Benefits
* Paid Parental Leave benefits
* Employee Assistance Program (EAP)
* Well-stocked kitchen in office locations
* Professional development and training opportunities

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Cloud, Senior and Engineer jobs:

$65,000 — $110,000/year
#Location
San Francisco, California, United States
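The SmithRx listing above (like the Verana one before it) centers on ETL pipelines with data-quality checks. As a toy sketch of that extract-transform-load shape in plain Python (all record fields and helper names are invented for illustration; a list stands in for the warehouse):

```python
def extract(rows):
    """Extract: yield raw records (an in-memory stand-in for a real source)."""
    yield from rows

def transform(record):
    """Transform: normalize field names and types."""
    return {"member_id": int(record["id"]), "state": record["state"].strip().upper()}

def quality_check(record):
    """Data-quality gate: reject records that violate basic invariants."""
    return record["member_id"] > 0 and len(record["state"]) == 2

def load(records, sink):
    """Load: append validated records to the sink."""
    sink.extend(records)
    return len(sink)

raw = [{"id": "1", "state": " ca"}, {"id": "-3", "state": "NY"}, {"id": "2", "state": "tx "}]
warehouse = []
clean = [r for r in map(transform, extract(raw)) if quality_check(r)]
load(clean, warehouse)
print(warehouse)  # the id=-3 record is dropped by the quality gate
```

Production systems would express the same stages in Spark, Airflow tasks, or dbt models, but the invariant is identical: bad records are caught at a quality gate before they reach the warehouse.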
Join SADA India as a Senior Data Engineer!

Your Mission

As a Sr. Data Engineer on the Enterprise Support service team at SADA, you will reduce customer anxiety about running production workloads in the cloud by implementing and iteratively improving observability and reliability. You will have the opportunity to engage with our customers in a meaningful way by defining, measuring, and improving key business metrics; eliminating toil through automation; inspecting code, design, implementation, and operational procedures; enabling experimentation by helping create a culture of ownership; and winning customer trust through education, skill sharing, and implementing recommendations. Your efforts will accelerate our customers' cloud adoption journey, and we will be with them through the transformation of their applications, infrastructure, and internal processes. You will be part of a new social contract between customers and service providers that demands shared responsibility and accountability: our partnership with our customers will ensure we are working towards a common goal and share a common fate.

This is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute their recommendations to our customers, and with Professional Services on large projects that require PMO support.

Pathway to Success

#MakeThemRave is at the foundation of all our engineering. Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems in the Google Cloud Platform.

Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. You will be measured bi-yearly by the breadth, magnitude, and quality of your contributions; your ability to estimate accurately; customer feedback at the close of projects; how well you collaborate with your peers; and the consultative polish you bring to customer interactions.

As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations

Customer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly or bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.

Onboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals.

Job Requirements

Required Credentials:

* Google Professional Data Engineer certification, or the ability to complete it within the first 45 days of employment
* A secondary Google Cloud certification in any other specialization

Required Qualifications:

* 5+ years of experience in cloud support
* Experience supporting customers, preferably in 24/7 environments
* Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
* Experience writing software in two or more languages such as Python, Java, Scala, or Go
* Experience building production-grade data solutions (relational and NoSQL)
* Experience with systems monitoring/alerting, capacity planning, and performance tuning
* Experience with BI tools like Tableau, Looker, etc. is an advantage
* A consultative mindset that delights the customer by building good rapport with them to fully understand their requirements and provide accurate solutions

Useful Qualifications:

* Mastery in at least one of the following domain areas:
  * Google Cloud Dataflow: building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow, and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ; autoscaling Dataflow clusters and troubleshooting cluster operation issues
  * Data integration tools: building data pipelines using modern data integration tools such as Fivetran, Striim, Data Fusion, etc. Must have hands-on experience configuring and integrating with multiple data sources within and outside of Google Cloud
  * Large enterprise migration: migrating entire cloud or on-prem estates to Google Cloud, including data lakes, data warehouses, databases, business intelligence, jobs, etc., and providing consultation on optimizing cost, defining methodology, and planning the migration
* Experience with IoT architectures and building real-time data streaming pipelines
* Experience operationalizing machine learning models on large datasets
* Demonstrated leadership and self-direction, including a willingness to teach others and learn new techniques
* Demonstrated skill in selecting the right statistical tools for a given data analysis problem
* Understanding of Chaos Engineering
* Understanding of PCI, SOC 2, and HIPAA compliance standards
* Understanding of the principle of least privilege and security best practices
* Understanding of cryptocurrency and blockchain technology

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Cloud, Senior and Engineer jobs:

$60,000 — $110,000/year
#Location
Thiruvananthapuram, Kerala, India
ABOUT THRIVE MARKET

Thrive Market was founded in 2014 with a mission to make healthy living easy and affordable for everyone. As an online, membership-based market, we deliver the highest quality healthy and sustainable products at member-only prices, while matching every paid membership with a free one for someone in need. Every day, we leverage innovative technology and member-first thinking to help our 1,000,000+ members find better products, support better brands, and build a better world in the process. We recently reached a significant milestone by becoming a Certified B Corporation, making us the largest grocer to earn this coveted qualification.

THE ROLE

Thrive Market's Data Engineering team is seeking a Senior Data Engineer!

We are looking for a brilliant, dedicated, and hardworking engineer to help us build high-impact products alongside our Data Strategy Team. Our site sees millions of unique visitors every month, and our customer growth currently makes us one of the fastest-growing e-commerce companies in Los Angeles. We are looking for a Senior Data Engineer with hands-on experience working on structured, semi-structured, and complex data processing and streaming frameworks. We need your amazing software engineering skills to help us execute our Data Engineering & Analytics initiatives and turn them into products that will provide great value to our members. In this role, we hope to bring in someone who is equally excited about our mission, eager to learn the tech behind the company, and able to work cross-functionally with other engineering teams.

RESPONSIBILITIES

* Work across multiple projects and efforts to orchestrate and deliver cohesive data engineering solutions in partnership with various functional teams at Thrive Market
* Be hands-on and take ownership of the complete cycle of data services, from data ingestion, data processing, and ETL to data delivery for reporting
* Collaborate with other technical teams to deliver data solutions that meet business and technical requirements; define technical requirements and implementation details for the underlying data lake, data warehouse, and data marts
* Identify, troubleshoot, and resolve production data integrity and performance issues
* Collaborate with all areas of data management as a lead to ensure patterns, decisions, and tooling are implemented in accordance with enterprise standards
* Perform data source gap analysis and create data source/target catalogs and mappings
* Develop a thorough knowledge and understanding of cross-system integration, interactions, and relationships in order to develop an enterprise view of Thrive Market's future data needs
* Design, coordinate, and execute pilots/prototypes/POCs to validate specific scenarios and provide an implementation roadmap
* Recommend and ensure technical functionality (e.g., scalability, security, performance, data recovery, reliability) for Data Engineering
* Facilitate workshops to define requirements and develop data solution designs
* Apply enterprise and solution architecture decisions to data architecture frameworks and data models
* Maintain a repository of all data architecture artifacts and procedures
* Collaborate with IT teams, software providers, and business owners to predict and devise data architecture that addresses business needs for collection, aggregation, and interaction with multiple data streams

QUALIFICATIONS

* Hands-on experience programming in Python, Scala, or Java
* Expertise with RDBMS and data warehousing (strong SQL) on Redshift, Snowflake, or similar
* In-depth knowledge and experience with data and information architecture patterns and implementation approaches for operational data stores, data warehouses, data marts, and data lakes
* Proficiency in logical/physical data architecture, design, and development
* Experience implementing a data lake / big data analytics platform, either cloud-based or on-premise; AWS preferred
* Experience working with high volumes of data; experience in the design, implementation, and support of highly distributed data applications
* Experience with development tools for CI/CD, unit and integration testing, automation, and orchestration, e.g. GitHub, Jenkins, Concourse, Airflow, Terraform
* Experience writing Kafka producers and consumers, or experience with AWS Kinesis
* Hands-on experience developing a distributed data processing platform with big data technologies like Hadoop, Spark, etc.
* A knack for independence (hands-on) as well as teamwork
* Excellent analytical and problem-solving skills, often in light of ill-defined issues or conflicting information
* Experience with streaming data ingestion, machine learning, and Apache Spark a plus
* Adept at eliciting, gathering, and managing requirements in an Agile delivery environment
* Excellent communication and presentation skills (verbal, written, presentation) across all levels of the organization; ability to translate ambiguous concepts into tangible ideas

BELONG TO A BETTER COMPANY

* Comprehensive health benefits (medical, dental, vision, life, and disability)
* Competitive salary (DOE) + equity
* 401k plan
* Flexible Paid Time Off
* Subsidized ClassPass Membership with access to fitness classes and wellness and beauty experiences
* Ability to work in our beautiful co-working space at WeWork in Playa Vista and other locations
* Dog-friendly office
* Free Thrive Market membership and discount on private label products
* Coverage for Life Coaching & Therapy Sessions on our holistic mental health and well-being platform

We're a community of more than 1,000,000 members who are united by a singular belief: It should be easy to find better products, support better brands, make better choices, and build a better world in the process.

Feeling intimidated or hesitant about applying because you don't meet every requirement? Studies show that women and people of color are less likely to apply for jobs if they do not meet every single qualification. At Thrive Market, we believe in building a diverse, inclusive, and authentic culture. If you are excited about this role along with our mission and values, we sincerely encourage you to apply anyway! As the great Los Angeles King Wayne Gretzky said, "You miss 100% of the shots you don't take." Take the shot!

Thrive Market is an EOE/Veterans/Disabled/LGBTQ employer.

At Thrive Market, our goal is to be a diverse and inclusive workplace that is representative, at all job levels, of the members we serve and the communities we operate in. We're proud to be an inclusive company and an Equal Opportunity Employer, and we prohibit discrimination and harassment of any kind. We believe that diversity and inclusion among our teammates is critical to our success as a company, and we seek to recruit, develop, and retain the most talented people from a diverse candidate pool. If you're thinking about joining our team, we expect that you would agree!

If you need assistance or accommodation due to a disability, please email us at [email protected] and we'll be happy to assist you.

Ensure your Thrive Market job offer is legitimate and don't fall victim to fraud. Thrive Market never seeks payment from job applicants. Thrive Market recruiters will only reach out to applicants from an @thrivemarket.com email address. For added security, where possible, apply through our company website at www.thrivemarket.com.

© Thrive Market 2023. All rights reserved.

JOB INFORMATION

* Compensation Description - The base salary range for this position is $160,000 - $190,000 per year.
* Compensation may vary outside of this range depending on several factors, including a candidate's qualifications, skills, competencies, experience, and geographic location.
* Total compensation includes base salary, stock options, health & wellness benefits, flexible PTO, and more!

#LI-DR1

#Salary and compensation
No salary data was published by the company, so we estimated a salary range based on similar Design, Cloud, Scala, Senior and Engineer jobs:\n\n
$55,000 — $105,000/year\n
\n\n#Benefits\n
* 401(k)\n* Distributed team\n* Async\n* Vision insurance\n* Dental insurance\n* Medical insurance\n* Unlimited vacation\n* Paid time off\n* 4 day workweek\n* 401k matching\n* Company retreats\n* Coworking budget\n* Learning budget\n* Free gym membership\n* Mental wellness budget\n* Home office budget\n* Pay in crypto\n* Pseudonymous\n* Profit sharing\n* Equity compensation\n* No whiteboard interview\n* No monitoring system\n* No politics at work\n* We hire old (and young)\n\n
\n\n#Location\nLos Angeles or Remote
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay for equipment that the company promises to reimburse later, and never pay for training you are required to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages about "how to work online" are also scams; don't use them or pay for them. Always verify that you are actually talking to the company in the job post and not an imposter. A good practice is to check whether the site or email domain matches the company's main domain name. Scams in remote work are rampant, so be careful! Read more to avoid scams. When clicking the apply button above, you will leave Remote OK and go to the company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.
This job post is closed and the position is probably filled. Please do not apply. Work for Theorem Partners and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after the apply link errored with code 404, 3 years ago.
\nRole Mission: Theorem's Site Reliability Engineer will be responsible for accelerating the delivery of working, reliable software into production through the systematic application of software automation to the firm’s IT operations. \n\nLevel: Senior engineer with previous career experience and domain expertise in engineering operations directly focused on cloud infrastructure. \n\nWhat You'll Be Doing\n\nServe as the technical operations lead for the following systems:\n\n\n* Cloud configuration (AWS + IONOS / Terraform)\n\n\n* Complete a turndown of Theorem's IONOS datacenter presence.\n\n* Theorem's cloud environment to be completely codified in Terraform and reconfigurable via a CD pipeline.\n\n* Reduce privileges of technical employees to require creation of production infrastructure via Terraform.\n\n\n\n\n\n* Container orchestration (our Kubernetes clusters)\n\n\n* Create an access audit log for our production Kubernetes clusters.\n\n* Perform cluster software upgrades.\n\n* Provision a high performance Ceph storage cluster.\n\n\n\n\n\n* Continuous integration & deployment (currently Jenkins)\n\n\n* Build a continuous delivery pipeline for deploying changes to alerting infrastructure.\n\n* Explore alternative cloud-native CI/CD offerings to potentially replace Jenkins\n\n\n\n\n\n* Monitoring & alerting (Prometheus / Alertmanager / PagerDuty)\n\n\n* Redefine our alert-handling policies to significantly reduce the frequency of unhandled alerts.\n\n* Creation of a cost allocation dashboard that associates cloud costs to business programs and reveals how Theorem spends its money on compute resources.\n\n* Creation of an alert SLO dashboard that reveals how much time we spend dealing with operational toil and which systems are being operationally neglected.\n\n\n\n\n\n\n\n\nSupport and assist the following data infrastructure objectives:\n\n\n* Complete a database migration away from AWS Redshift to Parquet in S3.\n\n* Deploy a distributed dataflow engine (e.g. 
Apache Spark) for use in the data pipeline\n\n* Streamline the data pipeline deployment process.\n\n* Ingest systems operations data into the data warehouse.\n\n\n\n\nCompetencies\n\n\n* Experience with and knowledge of public cloud offerings such as AWS (preferred), GCP, Azure, etc.\n\n* Previous experience with infrastructure automation tools such as Terraform (preferred), CloudFormation, Ansible, Chef, Puppet, etc.\n\n* Experience with various flavors of the Linux operating system, including shell scripting in bash or zsh.\n\n* Working knowledge of Python.\n\n* Previous experience working with container orchestration systems, such as Kubernetes (preferred), Borg, Twine, Apache Mesos, etc.\n\n* Has a deep appreciation for continuous integration and deployment (CI/CD).\n\n* Is familiar with standard practices in software monitoring and alerting.\n\n* Has a working knowledge of SQL, data warehousing, and ETL pipelines.\n\n* Understanding of staging environments, integration and release testing.\n\n* Attention to detail\n\n* Hardworking and gritty\n\n* Sharp and fast learner\n\n* Transparency & intellectual honesty\n\n* Welcomes and adapts behavior to feedback\n\n* Collaborative and team-oriented\n\n* Ability to communicate and escalate issues\n\n\n\n\nTraining & Experience\n\n\n* Previous experience and subject matter expertise in aspects of build engineering. Experience using modern build systems such as Bazel, Buck, Pants, Please, etc. is desirable, but not a hard requirement. Candidates with no previous experience with any of those tools should signal enthusiasm for learning to use and extend Bazel.\n\n* Has a deep appreciation for continuous integration and deployment (CI/CD). All engineers contribute to CI/CD pipelines, but our hire will help us organize and streamline our CI pipelines to make it simple for other engineers to contribute.\n\n* Has previous experience managing containerized workloads. Ideally previous experience with Kubernetes, but not required. 
Our hire will eventually become a primary owner of our Kubernetes clusters.\n\n* Is familiar with standard practices in software monitoring and alerting. They will be expected to create software monitors and configure alert routing so that the appropriate engineers are notified of failures.\n\n* Understands how to use staging environments and integration testing to create system release tests. All engineers are expected to create release tests; our hire will help define standard testing environments that make it easier for others to write release tests.\n\n* Bachelor's degree or higher in computer science, software engineering, or a related discipline.\n\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated a salary range based on similar Admin, Senior, Engineer, Sys Admin, Cloud, Apache and Linux jobs:\n\n
$70,000 — $120,000/year\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
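The alert-SLO objective in the Theorem post above (a dashboard revealing how much time goes to operational toil and which systems are neglected) amounts to a small aggregation over alert events. Below is a minimal pure-Python sketch; the `Alert` fields and system names are hypothetical, and a real implementation would pull this data from Alertmanager or PagerDuty rather than an in-memory list.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    system: str              # hypothetical label, e.g. the cluster or CI system that fired
    minutes_to_resolve: float
    acknowledged: bool       # False = the alert fired and was never handled

def toil_summary(alerts):
    """Aggregate per-system toil: total handling minutes and the unhandled-alert rate."""
    summary = {}
    for a in alerts:
        s = summary.setdefault(a.system, {"minutes": 0.0, "fired": 0, "unhandled": 0})
        s["fired"] += 1
        if a.acknowledged:
            s["minutes"] += a.minutes_to_resolve
        else:
            s["unhandled"] += 1
    for s in summary.values():
        s["unhandled_rate"] = s["unhandled"] / s["fired"]
    return summary

# Illustrative events: two alerts from a Kubernetes cluster, one from Jenkins.
alerts = [
    Alert("kubernetes", 30.0, True),
    Alert("kubernetes", 0.0, False),   # fired but never acknowledged
    Alert("jenkins", 45.0, True),
]
report = toil_summary(alerts)
```

A high `unhandled_rate` flags the "operationally neglected" systems the post describes; `minutes` approximates toil per system.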
This job post is closed and the position is probably filled. Please do not apply. Work for DataTruth and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after the apply link errored with code 404, 3 years ago.
\nThe role:\n\nDo you thrive on simplifying complicated systems and creating structure? We're looking for a data engineer with a passion for designing data platforms and solutions that let businesses easily access the information they need.\n \n You'll have a proven track record in the architecture and optimisation of data systems and building the data pipeline from the bottom up. You must be able to select appropriate technology to match the needs of the business, while taking into account their current applications, standards and assets.\n \n As a senior programmer, you'll have a methodical approach to streamlining data sources and be able to communicate effectively across teams to solve their challenges. In return, we'll offer a competitive rate and some amazing perks, plus we're a fun team to work with, we promise.\n\nResponsibilities:\n\n\n* Plan and implement an optimal data pipeline architecture\n\n* Make sense of complex data sets and models to fulfil business requirements\n\n* Identify and implement process improvements, automate manual processes, optimise data delivery, and design the infrastructure for scalability\n\n* Enable optimal extraction, transformation, and loading of data from a range of data sources using SQL and big data technologies\n\n* Build analytics tools from the data pipeline to provide actionable business performance metrics such as customer acquisition and operational efficiency\n\n* Keep data secure\n\n* Strive for innovative functionality of data systems by working with data and analytics experts\n\n\n\n\nSkills & Competencies:\n\nThe successful candidate will have:\n\n\n* Advanced working knowledge of SQL and experience with relational databases and query authoring\n\n* Programming languages including Python, possibly Node\n\n* Working knowledge of a variety of databases (desirable)\n\n* Experience designing, building and optimizing 'big data' data pipelines, architectures and data sets\n\n* Strong analytic skills related 
to working with unstructured datasets\n\n* Experience building processes supporting data transformation, data structures, metadata, dependency and workload management\n\n* A successful history of manipulating, processing and extracting value from large disconnected datasets\n\n* Strong project management and organisational skills\n\n* 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another similar field\n\n\n\n\nDesirable experience: \n\n\n* Google Cloud Platform (GCP), AWS, Apache Spark, Hadoop\n\n* DevOps, CI/CD and infrastructure as code\n\n* Open source data visualisation platforms such as Plotly, Tableau, QlikView, FusionCharts, Sisense, Chartio or similar\n\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated a salary range based on similar Engineer, Cloud, Senior and Apache jobs:\n\n
$75,000 — $120,000/year\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
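The pipeline responsibilities in the post above (extraction, transformation and loading of data from a range of sources using SQL) reduce to the classic ETL shape. Here is a minimal sketch using only the Python standard library; the `orders` schema and the in-memory source are made up for illustration and stand in for a real API, file feed and warehouse.

```python
import sqlite3

def extract(rows):
    """Extract: yield raw records from a source (an in-memory list stands in for an API or file)."""
    yield from rows

def transform(records):
    """Transform: normalise fields and drop records that fail validation."""
    for r in records:
        if r.get("amount") is None:
            continue  # skip incomplete records
        yield (r["customer"].strip().lower(), round(float(r["amount"]), 2))

def load(conn, rows):
    """Load: write the cleaned rows into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

raw = [
    {"customer": " Alice ", "amount": "19.994"},
    {"customer": "Bob", "amount": None},       # dropped by validation
    {"customer": "alice", "amount": "5"},
]
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(raw)))
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

Because each stage is a generator, the pipeline streams record by record, which is the same structure a scaled-up Spark or Airflow job would follow.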
This job post is closed and the position is probably filled. Please do not apply. Work for BlueLabs Software and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after the apply link errored with code 404, 3 years ago.
\nA few months ago we started out with the vision of building a next generation sports betting platform focused on performance, reliability, modularity and automation. We believe that our experience paired with today’s technologies, great talent and the agility of a startup environment will enable us to deliver a best-in-class product that meets the demands of the market of tomorrow. \n\nWe are now on the lookout for a Lead Infrastructure Engineer who wants to join our distributed team and help us execute our vision. \n\nAs Lead Infrastructure Engineer you will be responsible for ensuring that our different engineering teams have their needs covered when it comes to cross-cutting concerns such as Infrastructure, Security, Instrumentation and CI/CD Pipelines.\n\nYour deliverables will have a direct impact on the day-to-day work of all our engineers. You will help them reduce the time to market of the tasks and features they are working on. Whilst development and operational efficiency will be your primary goal, you should also have an eye for performant and resilient infrastructures.\n\nYour responsibilities as Lead Infrastructure Engineer at BlueLabs will include:\n\n\n* \n\nEnable the different teams to deliver and operate their services on their own\n\n\n* \n\nBe a strong advocate for keeping all things automated and under version control\n\n\n* \n\nProvide DevOps and SRE expertise and best practices to all developers within the organisation\n\n\n* \n\nWork hand in hand with the Engineering Leads to execute the company's strategy and assist the development teams when it comes to topics such as Infrastructure, Security, Instrumentation and CI/CD pipelines\n\n\n* \n\nBe the first point of contact when it comes to cross-cutting concerns, while ensuring that you do not become a single point of failure\n\n\n\n\n\nBlueLabs’ Sports Betting platform consists of tens of microservices written in Elixir, Go and Scala. 
You are not required to have professional experience with any of these languages, but an interest in them will be viewed positively, since it will help you better understand the needs of our Engineering teams.\n\nRemote Work\n\nWe are hiring for talent, not for a specific location. You will find that members of our team are distributed all over Europe. Being a distributed team enables us to hire only the best, without being restricted to the talent pool available at a specific geographic location. However, to facilitate team communication and collaboration we currently require you to be located in a European time zone (between UTC-1 and UTC+3). You must also be able to travel to other European locations a few times a year for on-site meetings and workshops.\n\nCompensation\n\nThe base compensation range for this role is €65k-90k annually, depending on your background and experience. It can be subject to an adjustment of up to 15% in either direction if the cost of living in your region is far above or below the European average. As an independent contractor you will be responsible for paying any taxes or applicable fees in your country of residence (unless you are based in Malta, in which case you will be employed). In addition to that, we offer a number of perks to each of our team members as we truly believe in a healthy work-life balance and continuous learning.\n\nRequirements\n\n\n* 5+ years of professional experience working as a DevOps (or Systems) Engineer\n\n* Hands-on experience running Kubernetes in production\n\n* Professional experience with GCP and/or AWS\n\n* Deep understanding of cloud technologies (e.g. GCP), container orchestration platforms (e.g. Kubernetes), infrastructure as code (e.g. Terraform), code instrumentation (e.g. Prometheus, Grafana, Splunk) and CI/CD pipelines (e.g. Jenkins)\n\n* Interest in distributed systems\n\n* Interest in microservice architecture, data- and message-based technologies (e.g. 
RabbitMQ, Apache Kafka)\n\n* Interest in keeping yourself up to date and learning new languages, frameworks and technologies as required\n\n* Product-oriented mindset and eagerness to take part in shaping the products we build\n\n* Ability to work autonomously in a fully distributed team\n\n* Good communication skills in verbal and written English\n\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated a salary range based on similar Engineer, Executive, DevOps, Cloud, Travel and Apache jobs:\n\n
$75,000 — $120,000/year\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
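The message-based technologies named in the post above (RabbitMQ, Apache Kafka) typically offer at-least-once delivery: a consumer must acknowledge each message, and anything unacknowledged is redelivered. The toy in-memory broker below sketches that contract only; it is not the RabbitMQ or Kafka API, and the `bet_id` payloads are invented for illustration.

```python
import queue

class ToyBroker:
    """Minimal in-memory stand-in for a message broker with ack/redelivery semantics."""
    def __init__(self):
        self._q = queue.Queue()
        self._unacked = {}   # delivery tag -> message body
        self._next_tag = 0

    def publish(self, body):
        self._q.put(body)

    def get(self):
        body = self._q.get_nowait()
        self._next_tag += 1
        self._unacked[self._next_tag] = body
        return self._next_tag, body

    def ack(self, tag):
        del self._unacked[tag]

    def requeue_unacked(self):
        # Simulates redelivery after a consumer crash: at-least-once, not exactly-once.
        for body in self._unacked.values():
            self._q.put(body)
        self._unacked.clear()

broker = ToyBroker()
broker.publish({"bet_id": 1})
broker.publish({"bet_id": 2})

processed = []
tag, msg = broker.get()
processed.append(msg)
broker.ack(tag)              # handled successfully
tag, msg = broker.get()      # consumer "crashes" before acking message 2
broker.requeue_unacked()     # broker redelivers it
tag, msg = broker.get()
processed.append(msg)
broker.ack(tag)
```

The redelivery path is why consumers of such systems must be idempotent: the same message can legitimately arrive twice.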
This job post is closed and the position is probably filled. Please do not apply. Work for Elastic and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after the apply link errored with code 404, 3 years ago.
\nElastic is a search company with a simple goal: to solve the world's data problems with products that delight and inspire. As the creators of the Elastic Stack, we help thousands of organizations including Cisco, eBay, Grab, Goldman Sachs, ING, Microsoft, NASA, The New York Times, Wikipedia, and many more use Elastic to power mission-critical systems. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. We have a distributed team of Elasticians across 30+ countries (and counting), and our diverse open source community spans over 100 countries. Learn more at elastic.co\n\nAbout The Role\n\nThe Cloud team is responsible for the development of products such as Elastic Cloud Enterprise and Elastic Cloud on Kubernetes, as well as the operation of our Elastic as a Service offering. The SaaS offerings are built on top of the cloud products.\n\nWe are looking for a technology and engineering leader to help us realize our Cloud team’s goals. You will be responsible for technical design for new features and improving existing subsystems, and will work with several functional areas in Cloud. Your responsibilities will include technical leadership that will enable Elastic products to be metered, reported for usage, and invoiced monthly. The areas you will work with are impactful to Elastic - they contribute to Elastic’s SaaS consumption-based billing platform, chargeback features for the on-premise product, and integration with the AWS/GCP/Azure marketplaces. The data ingestion system we build to power these features processes a critical stream of events with low-latency and real-time requirements.\n\nYou will participate in roadmap and project planning efforts and will have ownership for delivering on them. 
You’ll be participating in project management efforts as the teams execute on plans, and you’ll have a role in communicating progress and status to various teams at Elastic.\n\nSome of the things you'll work on\n\n\n* Provide technical leadership across several functional areas in the team, like metering, usage reporting, billing and invoicing, and marketplace integration, across all products in Elastic\n\n* Work with Product Management to define new consumption models that will increase the flexibility of our SaaS offering, attracting more users. Your contributions will play a role in improving our conversion rates and improving upgrades to higher subscription tiers\n\n* Work on a global scale, with all the major Cloud hosting providers (AWS, GCP, Azure, IBM Cloud, etc.) and their marketplace solutions\n\n* Work on creating a stable, scalable and reliable data ingestion pipeline, built using Elastic products, to harvest usage and telemetry data from multiple products\n\n* Have a scope that covers contributing to technical plans and direction within Cloud and across other product teams in Elastic\n\n* Be a contact point in Cloud for other teams within Elastic. Examples include helping Support with difficult cases or consulting with Elastic Stack engineers on designing new features in a Cloud-compatible way\n\n* Understand our company strategy, help translate it into technical terms, and guide our Cloud product’s direction to realize it\n\n* Create technical designs and build POCs for new efforts, validating that a wild idea works before committing to it\n\n* Collaborate with other Tech Leads on the Cloud team and across Elastic to align priorities and roadmaps, and make appropriate technology choices and compromises\n\n* Be hands-on with the codebase. 
Review work done in the team, and provide constructive feedback\n\n* Help the team define coding practices and standards\n\n\n\n\nWhat you will bring along\n\n\n* Previous experience providing pragmatic technical leadership for a group of engineers\n\n* Previous experience in a role with ownership for technical direction and strategy, preferably in a start-up or scale-up environment\n\n* Experience designing data pipelines that ingest logs or metrics data from distributed systems\n\n* Proven experience as a software engineer, with a track record of delivering high quality code, preferably in Python, Java or Go\n\n* Experience implementing, or deep knowledge of, consumption-based SaaS billing platforms with features like overages, discounts, monthly and annual models, etc.\n\n* Previous experience working with various partners outside of Engineering, such as IT and Finance Operations teams\n\n* Technical depth in one or more technologies relevant for SaaS (orchestration, networking, Docker, etc.)\n\n* Deals well with ambiguous problems; ability to think of simple solutions that reduce operational overhead and improve code maintainability\n\n* Interest in solving challenges in a SaaS billing platform, in terms of accuracy, scale, and features that make users' lives easier\n\n\n\n\nNice to have\n\n\n* Experience with Elasticsearch as a user - understanding data modeling, aggregations and querying capabilities\n\n* Experience integrating applications with AWS, GCP or Azure marketplace solutions\n\n* Experience integrating with Cloud billing providers such as Stripe or Zuora\n\n\n\n\n#LI-CB1\n\nAdditional Information - We Take Care of Our People\n\nAs a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. 
It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.\n\nWe strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.\n\n\n* Competitive pay based on the work you do here and not your previous salary\n\n* Health coverage for you and your family in many locations\n\n* Ability to craft your calendar with flexible locations and schedules for many roles\n\n* Generous number of vacation days each year\n\n* Double your charitable giving — we match up to 1% of your salary\n\n* Up to 40 hours each year to use toward volunteer projects you love\n\n* Embracing parenthood with a minimum of 16 weeks of parental leave\n\n\n\n\nElastic is an Equal Employment Opportunity employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.\n\nWhen you apply to a job on this site, the personal data contained in your application will be collected by Elasticsearch, Inc. (“Elastic”) which is located at 800 W. El Camino Real, Suite 350 Mountain View, CA 94040 USA, and can be contacted by emailing [email protected]. Your personal data will be processed for the purposes of managing Elastic’s recruitment related activities, which include setting up and conducting interviews and tests for applicants, evaluating and assessing the results thereto, and as is otherwise needed in the recruitment and hiring processes. Such processing is legally permissible under Art. 
6(1)(f) of Regulation (EU) 2016/679 (General Data Protection Regulation) as necessary for the purposes of the legitimate interests pursued by Elastic, which are the solicitation, evaluation, and selection of applicants for employment. Your personal data will be shared with Greenhouse Software, Inc., a cloud services provider located in the United States of America and engaged by Elastic to help manage its recruitment and hiring process on Elastic’s behalf. Accordingly, if you are located outside of the United States, your personal data will be transferred to the United States once you submit it through this site. Because the European Union Commission has determined that United States data privacy laws do not ensure an adequate level of protection for personal data collected from EU data subjects, the transfer will be subject to appropriate additional safeguards under the standard contractual clauses. You can obtain a copy of the standard contractual clauses by contacting us at [email protected]. Elastic’s data protection officer is Daniela Duda, who can be contacted at [email protected]. We plan to keep your data until our open role is filled. We cannot estimate the exact time period, but we will consider this period ended when a candidate accepts our job offer for the position for which we are considering you. When that period is over, we may keep your data for an additional period no longer than 3 years in case additional opportunities present themselves for which your skills might be better suited. For additional details, please see our Elastic Privacy Statement https://www.elastic.co/legal/privacy-statement. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated a salary range based on similar Cloud, Finance, Elasticsearch, Java, SaaS and Apache jobs:\n\n
$70,000 — $120,000/year\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
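Consumption-based billing with overages and discounts, as described in the Elastic post above, can be reduced to a small pricing function: a flat fee covers an included allowance, usage beyond it is charged per unit, and a percentage discount is applied at the end. All tier parameters below are invented for illustration and are unrelated to Elastic's actual pricing.

```python
def monthly_invoice(usage_gb, included_gb, base_fee, overage_per_gb, discount=0.0):
    """Compute one month's bill for a consumption-based plan.

    base_fee covers up to included_gb of usage; anything beyond is billed
    at overage_per_gb; discount is a fraction (0.10 = 10% off the total).
    All parameter values are hypothetical examples.
    """
    overage = max(0.0, usage_gb - included_gb) * overage_per_gb
    return round((base_fee + overage) * (1 - discount), 2)

# 120 GB used on a plan that includes 100 GB, with a 10% annual-commit discount:
bill = monthly_invoice(120, 100, base_fee=95.0, overage_per_gb=0.5, discount=0.10)
```

In a real metering pipeline this function would sit at the end, after usage events have been ingested, deduplicated and aggregated per customer per month.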
This job post is closed and the position is probably filled. Please do not apply. Work for Edward Jones and want to re-open this job? Use the edit link in the email when you posted the job!
\n**You must reside in the United States**\n\nAt Edward Jones we are developing a next-generation analytics architecture to support our growing business. As part of the Data Architecture and Data Design team, you will be challenged daily to find innovative solutions to improve the speed and consumption of our information. This will also bring you into contact with a diverse array of stakeholders ranging from IT to our front-line business organizations, and you will be working on defining an architecture platform that establishes the foundational layer for Edward Jones analytics for years to come.\n\nResponsibility Summary:\n\n\n\n* Design and deliver data and analytic platform requirements to ensure the highest levels of availability, performance and security of the firm's data and information\n\n* Responsible for working with other internal engineering roles and vendors to define required systems integrations to support initiatives\n\n* Drive significant data technology initiatives end to end, across multiple layers of architecture, and across multiple distribution technologies for operational and analytic environments\n\n* Drive design and implementation of durable data architecture and software solutions that will solve critical customer problems, performance barriers and security needs\n\n* Recommend development best practices for application development as it relates to integrating data requirements\n\n* Deliver technical design and implement highly available, scalable and secure data services with excellent quality\n\n* Partner with other groups both inside and outside of IT for cross-functional design and solution integration\n\n* Work with cross-functional team members from architecture, product management, and production development to develop, test, and release features\n\n* Actively stay abreast of mobile/SaaS/PaaS trends and standards, recommend best practices and share learning\n\n* Pursue and resolve complex or uncharted technical problems and 
share key learnings\n\n* Provide technical leadership and be a role model to data and analytic software engineers pursuing a technical career path in engineering\n\n* Provide and inspire innovations that fuel the growth of the firm as a whole\n\n* Coach and mentor other data platform engineers in process and methodologies\n\n* Provide perspective on leading industry trends, recommendations on new and emerging technologies, technology prototypes, patent proposals and engineering process improvements\n\n\n\n\n\n* Strong educational background with a BS/MS in Computer Science or related area\n\n* 8+ years of experience developing systems/software for large business environments\n\n* Strong experience leading design and implementation of robust and highly scalable services\n\n* Strong object-oriented design and solution-oriented architecture principles, with the ability to implement them in a language of choice (Java, Spark)\n\n* Experience with SQL (DB2, AWS RDS, Oracle, MySQL, Google BigQuery, SAP HANA, Teradata, PostgreSQL, SQL Server) and NoSQL databases (MongoDB, Cassandra, Apache HBase, MarkLogic) is required\n\n* Experience with Amazon AWS/Microsoft Azure or a similar cloud platform is required (2-3 years preferred)\n\n* Experience with microservices is highly desirable\n\n* Experience with full stack development and automation\n\n* Proven experience in installing, configuring and troubleshooting UNIX/Linux and Windows Server based environments\n\n* 5-7 years of experience in the administration and performance tuning of application stacks (Tomcat, JBoss, Apache, Ruby, NGINX)\n\n* Experience with virtualization and containerization (VMware, VirtualBox)\n\n* Experience with monitoring systems\n\n* Demonstrated scripting skills (shell scripts, Perl, Ruby, Python)\n\n* Demonstrated networking knowledge (OSI network layers, TCP/IP)\n\n* Able to operate at highly varying levels of abstraction from business strategy to product strategy to high-level technical design 
to detailed technical design to implementation\n\n* Ability to handle a fast paced environment for iterative project turnarounds on mission critical systems\n\n* Passionate for continuous learning, experimenting and applying cutting edge technology and software paradigms\n\n\n \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated a salary range based on similar Engineer, Amazon, Cloud and Apache jobs:\n\n
$75,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
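The scripting and networking requirements in the listing above (shell/Python scripting, TCP/IP, monitoring systems) describe routine reachability checks. A minimal Python sketch of such a check; the host names and ports in the inventory are purely hypothetical:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical inventory; a real one would come from config management.
SERVICES = [("app-server.internal", 8080), ("db.internal", 5432)]

def report(services=SERVICES):
    """Map each host:port pair to its current reachability."""
    return {f"{host}:{port}": check_tcp(host, port) for host, port in services}
```

In practice a check like this would feed a monitoring system on a schedule rather than run ad hoc.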
This job post is closed and the position is probably filled. Please do not apply.
\n Thrive Global is changing the way people live through our behavior change platform and apps used to impact individual and organizational well-being and productivity. The marriage of data and analytics, our best-in-class content and science-backed behavior change IP will help people go from knowing what to do to actually doing it, enabling millions of consumers to begin the Thrive behavior change journey. As a technical lead on Thrive’s Data Science and Analytics team, you will play a significant role in building Thrive’s platform and products.\n\nWho We Are Looking For\n\n\n* A versatile engineering lead who has significant experience with data at all levels of maturity: raw telemetry through deployment and maintenance of models in operations\n\n* Is excited about collaborating with others, engineering and non-engineering, both learning & teaching as Thrive grows\n\n* An innovator looking to push the boundaries of automation, intelligent ETL, AIOps and MLOps to drive high-quality insights and operational efficiency within the team\n\n* Has a proven track record of building and shipping data-centric software products\n\n* Desires a position that is approximately 75% individual technical contribution and 25% mentoring junior engineers or serving as a trusted advisor to engineering leadership\n\n* Is comfortable in a high-growth, start-up environment and is willing to wear many hats and come up with creative solutions\n\n\n\n\nHow You’ll Contribute\n\n\n* Collaborate with the Head of Data Science and Analytics to design an architecture and infrastructure to support data engineering and machine learning at Thrive\n\n* Implement a production-grade data science platform, which includes building data pipelines, automating data quality assessments, and automatically deploying models into production\n\n* Develop new technology solutions to ensure a seamless transition of machine learning algorithms to production software, to enable the building of easy-to-use datasets and to reduce other friction points within the data science life-cycle\n\n* Assist with building a small but skilled interdisciplinary team of data professionals: scientists, analysts, and engineers\n\n* Consider user privacy and security at all times\n\n\n\n\nRequired Skills\n\n\n* Master’s or Ph.D. degree in Computer Science or a related discipline (e.g., Mathematics, Physics)\n\n* 3+ years of technical leadership (teams of 5 or more) on data engineering or machine learning projects\n\n* 8+ years of industry experience with data engineering and machine learning\n\n* Extensive programming experience in Java or Python with applications in data engineering and machine learning\n\n* Experience with data modeling, large-scale batch and real-time data processing, and ETL design, implementation and maintenance\n\n* Excellent verbal and written communication skills\n\n* Self-starter with a positive attitude, intellectual curiosity and a passion for analytics and solving real-world problems\n\n\n\n\nRelevant Technology and Tools Experience\nA good cross-section of experience in the following areas is desired:\n\n\n* AI/ML platforms: TensorFlow, Apache MXNet, Theano, Keras, CNTK, scikit-learn, H2O, Spark MLlib, AWS SageMaker, etc.\n\n* Relational databases: MySQL, Postgres, Redshift, etc.\n\n* Big data technologies: Spark, HDFS, Hive, Yarn, etc.\n\n* Data ingestion tools: Kafka, NiFi, Storm, Amazon Kinesis, etc.\n\n* Deployment technologies: Docker, Kubernetes, or OpenStack\n\n* Public cloud: Azure, AWS or Google Cloud Platform\n\n\n\n\nOur Mission\nThrive Global’s mission is to end the stress and burnout epidemic by offering companies and individuals sustainable, science-based solutions to enhance well-being, performance, and purpose, and create a healthier relationship with technology. Recent science has shown that the pervasive belief that burnout is the price we must pay for success is a delusion. 
We know, instead, that when we prioritize our well-being, our decision-making, creativity, and productivity improve dramatically. Thrive Global is committed to accelerating the culture shift that allows people to reclaim their lives and move from merely surviving to thriving.\n\nWhat We Offer\n\n\n* A mission-driven company that’s truly making a difference in the lives of people around the world \n\n* Ability to develop within the company and shape our growth strategy\n\n* Human-centric culture with a range of wellness perks and benefits\n\n* Competitive compensation package\n\n* Medical, vision and dental coverage + 401k program with company match\n\n* Generous paid time-off programs \n\n\n \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Machine Learning, Engineer, Executive, Teaching, Amazon, Java, Cloud, Python, Junior and Apache:\n\n
$75,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
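The "automation of data quality assessments" responsibility in the Thrive listing above can be sketched in a few lines: compute per-field null rates for a batch of records and gate the batch on a threshold. This is an illustrative fragment, not Thrive's actual pipeline; the field names and the 5% threshold are made up:

```python
def null_rate(records, field):
    """Fraction of records in which `field` is absent or None."""
    if not records:
        return 0.0
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records)

def quality_gate(records, required_fields, max_null_rate=0.05):
    """Return the fields whose null rate exceeds the threshold.

    An empty result means the batch passes and may proceed downstream
    (e.g. to model training).
    """
    failures = {}
    for field in required_fields:
        rate = null_rate(records, field)
        if rate > max_null_rate:
            failures[field] = rate
    return failures
```

A real platform would run checks like this automatically on every ingested batch and block deployment of models trained on failing data.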
This job post is closed and the position is probably filled. Please do not apply.
\nAt Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.\n\nElastic is seeking an Engineer to join the “Developer Tools” team inside Cloud Engineering. The charter for this team is to create self-service frameworks and developer environments that will empower our engineers to ship quality products on our SaaS platform. You will work together with the Product Development and Operational teams to continuously improve developer efficiency and to provide a tight feedback loop in the development cycle.\n\nWhat You Will Do:\n\n\n* Create and maintain “SaaS in a box” environments to test and validate features and bug fixes.\n\n* Empower engineers by creating a self-service platform to quickly validate changes\n\n* Create a framework to measure and report developer productivity as a metric. 
This framework will be used to identify bottlenecks and improve our processes and automation.\n\n* Collaborate with the release engineering team on initiatives like indexing test data in Elasticsearch, flaky test analysis and reporting on release efficiency\n\n* Collaborate with the SRE team to investigate production issues and address any gaps those investigations reveal\n\n* Grow and share your interest in technical outreach (blog posts, tech papers, conference speaking, etc.)\n\n\n\n\nWhat You Bring Along:\n\n\n* You have a software engineering background with a deep understanding of SaaS environments\n\n* You are passionate about building developer environments, automated tools and extensible frameworks that support the entire engineering team\n\n* Technical experience in CI/CD, source control and release strategies\n\n* Experience with Docker is a must\n\n* Knowledge of scripting languages\n\n* Good working knowledge of Linux is required\n\n* Experience working with AWS services and EC2 environments\n\n* Experience with the Scala and Java build ecosystems is a big plus\n\n\n\n\nAdditional Information:\n\nWe're looking to hire team members invested in realizing the goal of making real-time data exploration easy and available to anyone. As a distributed company, we believe that diversity drives our vibe! Whether you're looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with a great life.\n\n\n\n* Competitive pay based on the work you do here and not your previous salary\n\n* Equity\n\n* Global minimum of 16 weeks of fully paid parental leave (moms & dads)\n\n* Generous vacation time and one week of volunteer time off\n\n* Your age is only a number. 
It doesn't matter if you're just out of college or your children are; we need you for what you can do.\n\n\n\n\n\nElastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law. \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Cloud, Java, Scala, SaaS, Engineer, Apache and Linux:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
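The "measure and report developer productivity as a metric" charter in the Developer Tools listing above could start with something as simple as summarizing CI build durations. A sketch under the assumption that durations arrive as plain seconds; the metric names are illustrative:

```python
import statistics

def build_duration_report(durations_s):
    """Headline metrics for a batch of CI build durations, in seconds."""
    if len(durations_s) < 2:
        raise ValueError("need at least two builds to compute percentiles")
    # quantiles(n=100) yields 99 cut points; index 89 is the 90th percentile.
    percentiles = statistics.quantiles(durations_s, n=100)
    return {
        "builds": len(durations_s),
        "median_s": statistics.median(durations_s),
        "p90_s": percentiles[89],
    }
```

Tracking the p90 rather than the mean keeps the report sensitive to the slow tail of builds, which is usually where the bottlenecks the listing mentions actually live.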
This job post is closed and the position is probably filled. Please do not apply.
\nAt Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.\n\nElastic’s Cloud product allows users to build new clusters or expand existing ones easily. This product is built on a Docker-based orchestration system to easily deploy and manage multiple Elastic clusters.\n\nWhat You Will Do:\n\n\n* Work cross-functionally with product managers, analysts, and engineering teams to extract meaningful data from multiple sources\n\n* Develop analytical models to identify trends, calculate KPIs and flag anomalies. Use these models to generate reports and data dumps that enrich our KPI efforts\n\n* Design resilient data pipelines or ETL processes to collect, process and index business and operational data\n\n* Integrate several data sources, such as Salesforce, Postgres, and Elasticsearch, to create a holistic view of the Cloud business\n\n* Manage data collection services in production with the SRE team\n\n* Use Kibana and Elasticsearch to analyze business data. 
Help engineering and product teams make data-driven decisions.\n\n* Grow and share your interest in technical outreach (blog posts, tech papers, conference speaking, etc.)\n\n\n\n\nWhat You Bring Along:\n\n\n* You are passionate about developing software that delivers quality data to stakeholders\n\n* Hands-on experience building data pipelines using technologies such as Elasticsearch, Hadoop and Spark\n\n* Experience developing models for KPIs such as user churn, trial conversion, etc.\n\n* Ability to code in JVM-based languages or Python\n\n* Experience with data modeling\n\n* Experience doing ad-hoc data analysis for key stakeholders\n\n* A working knowledge of Elasticsearch\n\n* Experience building dashboards in Kibana\n\n* Experience working with ETL tools such as Logstash, Apache NiFi, or Talend is a plus\n\n* A deep understanding of relational as well as NoSQL data stores is a plus\n\n* A self-starter who has experience working across multiple technical teams and decision makers\n\n* You love working with a diverse, worldwide team in a distributed work environment\n\n\n\n\nAdditional Information\n\n\n* Competitive pay and benefits\n\n* Equity\n\n* Catered lunches, snacks, and beverages in most offices\n\n* An environment in which you can balance great work with a great life\n\n* Passionate people building great products\n\n* Employees with a wide variety of interests\n\n* Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.\n\n\n\n\nElastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law. \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Cloud, Engineer, Elasticsearch, NoSQL and Apache:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
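KPI models like the "user churn, trial conversion" examples in the listing above often reduce to set operations over user event logs. A toy sketch; the event names and tuple layout are made up for illustration, not taken from any Elastic schema:

```python
def funnel_kpis(events):
    """Compute trial-conversion and churn-style KPIs from (user_id, event) pairs.

    Event names here ("trial_started", "subscribed", "cancelled") are
    hypothetical placeholders for whatever the product actually emits.
    """
    trials = {u for u, e in events if e == "trial_started"}
    paid = {u for u, e in events if e == "subscribed"}
    cancelled = {u for u, e in events if e == "cancelled"}
    return {
        "trial_conversion": len(trials & paid) / len(trials) if trials else 0.0,
        "churn": len(paid & cancelled) / len(paid) if paid else 0.0,
    }
```

In a pipeline like the one the listing describes, the inputs would come from an Elasticsearch query and the outputs would land in a Kibana dashboard rather than a Python dict.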
This job post is closed and the position is probably filled. Please do not apply.
\nAt Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.\n\nWe are seeking a Cloud Engineer to join the Elastic Cloud team and help us make an immediate impact on our strategy and implementation of our Elastic Cloud and Elastic Cloud Enterprise products. You will make it easy for customers to use our products when deploying the Elastic Stack, either on our hosted service or on an infrastructure of their own choosing. You can grasp the opportunity to help lead our Cloud efforts.\n\nOur cloud product allows users to create new clusters or expand existing ones easily. This product is built on technologies such as OpenStack, AWS, Docker, and others to enable operations teams to easily create and manage multiple Elastic clusters. 
Does this sound like you?\n\nWhat you will be doing:\n\n\n* Develop software for our distributed systems and ES as a Service offerings\n\n* Become familiar with the failures and complications particular to a cloud offering\n\n* Debug meaningful technical issues inside a very deep and complex technical stack involving virtualization, containers, microservices, etc.\n\n* Assist in updating and maintaining a scalable cloud offering working across multiple clusters\n\n* Collaborate with Elastic’s engineering teams (Elasticsearch, Kibana, Logstash and Beats) to enable them to run on Cloud infrastructure\n\n\n\nWhat You Bring Along:\n\n\n\n* BS or MS degree in Computer Science\n\n* 5+ years of object-oriented development in at least one of Scala or Java\n\n* Familiarity with systems like ZooKeeper, etcd, Consul, etc. is a huge plus\n\n* Experience or familiarity with Docker, OpenStack, or AWS\n\n* Extensive experience crafting solutions for the server side of scalable cloud software applications and platforms\n\n* Proven ability to architect a highly distributed cloud system\n\n* Experience or ability to respond to operational issues\n\n* A self-starter with experience working with cross-functional technical teams and decision makers\n\n* You care deeply about the resiliency and quality of the features you ship\n\n* You love working with a worldwide team in a distributed work environment\n\n\n\nAdditional Information:\n\n\n\n* Competitive pay and stock options\n\n* Catered lunches, snacks, and beverages in most offices\n\n* An environment in which you can balance great work with a great life\n\n* Passionate people building excellent products\n\n* Employees with a wide variety of interests\n\n\n\n\nElastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law. \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Cloud, Engineer, Scala and Apache:\n\n
$80,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
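The orchestration problem described in the Cloud Engineer listing above (creating and managing many Elastic clusters on shared infrastructure) is, at its core, a placement problem: workloads with resource demands must land on hosts with finite capacity. A deliberately naive greedy placement sketch, not how Elastic Cloud actually schedules; host names and memory figures are invented:

```python
def allocate(clusters, hosts):
    """First-fit-decreasing placement: assign each (name, mem_gb) cluster
    to the host that currently has the most free memory.

    `hosts` maps host name -> free memory in GB. Raises if a cluster
    cannot fit anywhere.
    """
    free = dict(hosts)
    placement = {}
    # Place the largest clusters first to reduce fragmentation.
    for name, mem in sorted(clusters, key=lambda c: -c[1]):
        host = max(free, key=free.get)
        if free[host] < mem:
            raise RuntimeError(f"no capacity for cluster {name}")
        free[host] -= mem
        placement[name] = host
    return placement
```

Real orchestrators also weigh CPU, disk, zone spread and replica anti-affinity, but the capacity bookkeeping follows the same pattern.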
This job post is closed and the position is probably filled. Please do not apply.
\nAt Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.\n\nWe are seeking a Cloud Engineer to join the Elastic Cloud team and help us make an immediate impact on our strategy and implementation of our Elastic Cloud and Elastic Cloud Enterprise products. You will make it easy for customers to use our products when deploying the Elastic Stack, either on our hosted service or infrastructure of their choosing. You can grasp the opportunity to help lead our Cloud efforts.\n\nOur cloud product allows users to create new clusters or expand existing ones easily. This product is built on technologies such as OpenStack, AWS, Docker, and others to enable operations teams to create and manage multiple Elastic clusters easily. 
Does this sound like you?\n\nWhat you will be doing:\n\n\n* Develop software for our distributed systems and ES as a Service offering\n\n* Become familiar with the failures and complications particular to a cloud offering\n\n* Debug significant technical issues inside a very deep and complex technical stack involving virtualization, containers, microservices, etc.\n\n* Assist in updating and maintaining a scalable cloud offering working across multiple clusters\n\n* Collaborate with Elastic’s engineering teams (Elasticsearch, Kibana, Logstash, and Beats) to enable them to run on Cloud infrastructure\n\n\n\n\nWhat You Bring Along:\n\n\n* BS or MS degree in Computer Science\n\n* 5+ years of object-oriented development in at least one of Scala or Java\n\n* Familiarity with systems like ZooKeeper, etcd, Consul, etc. is a huge plus\n\n* Experience or familiarity with Docker, OpenStack, or AWS\n\n* Extensive expertise crafting solutions for the server side of scalable cloud software applications and platforms\n\n* Proven ability to architect a highly distributed cloud system\n\n* Experience or ability to respond to operational issues\n\n* A self-starter with experience working with cross-functional technical teams and decision makers\n\n* You care deeply about the resiliency and quality of the features you ship\n\n* You love working with a worldwide team in a distributed work environment\n\n\n\n\nAdditional Information:\n\n\n* Competitive pay and stock options\n\n* Catered lunches, snacks, and beverages in most offices\n\n* An environment in which you can balance great work with a great life\n\n* Passionate people building excellent products\n\n* Employees with a wide variety of interests\n\n\n\n\nElastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. 
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law. \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Cloud, Engineer, Scala and Apache:\n\n
$80,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
\nAt Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.\n\nElastic is building out our Elastic Cloud Team focusing on Elastic as a Service. This is a great opportunity to help lead our Cloud efforts and make an immediate impact on our strategy and implementation.\n\nOur cloud product allows users to create new clusters or expand existing ones easily. This product is built on technologies such as OpenStack, AWS, Docker, and others to enable operations teams to easily create and manage multiple Elastic clusters.\n\nWhat You Will Do:\n\n\n* Continuously integrate and automate deployment of the latest Elastic Stack versions to the cloud infrastructure\n\n* Build and manage Docker images for Elastic Stack components.\n\n* Work closely with the core infrastructure and SRE team to ship new features in our SaaS platform.\n\n\n\n\nWhat You Bring Along:\n\n\n* BS or MS degree in Computer Science\n\n* 5+ years of object-oriented development with at least one of: Scala, Python, Java\n\n* Experience building CI/CD infrastructure using Docker.\n\n* Experience or ability to respond to operational issues\n\n* A self-starter with experience working across multiple technical teams and decision makers\n\n\n\n\nAdditional Information:\n\nWe're looking to hire team members invested in realizing the goal of making real-time data exploration easy and available to anyone. 
As a distributed company, we believe that diversity drives our vibe! Whether you're looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with a great life.\n\n\n\n* Competitive pay based on the work you do here and not your previous salary\n\n* Stock options\n\n* Global minimum of 16 weeks of fully paid parental leave (moms & dads)\n\n* Generous vacation time and one week of volunteer time off\n\n* Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.\n\n\n\n\n\nElastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law. \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to Cloud, Engineer, Docker, SaaS and Apache:\n\n
$80,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
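The "build and manage Docker images for Elastic Stack components" task in the last listing above is, at its simplest, templating a Dockerfile per released version. A minimal sketch; the registry path follows Elastic's public image naming for Elasticsearch, while the helper names and port choices are illustrative:

```python
def dockerfile_for(version: str) -> str:
    """Render a minimal Dockerfile pinning one Elasticsearch release."""
    return (
        f"FROM docker.elastic.co/elasticsearch/elasticsearch:{version}\n"
        "EXPOSE 9200 9300\n"  # HTTP and transport ports
    )

def render_dockerfiles(versions, render=dockerfile_for):
    """Return {version: dockerfile_text} for a batch of stack versions."""
    return {v: render(v) for v in versions}
```

A CI job would feed this from the list of supported stack versions and hand each rendered Dockerfile to `docker build`, so that every release gets an image automatically.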