About Zeotap

Founded in Berlin in 2014, Zeotap started with a mission to provide high-quality data to marketers. As we evolved, we recognized a greater challenge: helping brands create personalized, multi-channel experiences in a world that demands strict data privacy and compliance. This drive led to the launch of Zeotap's Customer Data Platform (CDP) in 2020, a powerful, AI-native SaaS suite built on Google Cloud that empowers brands to unlock and activate customer data securely.

Today, Zeotap is trusted by some of the world's most innovative brands, including Virgin Media O2, Amazon, and Audi, to create engaging, data-driven customer experiences that drive better business outcomes across marketing, sales, and service. With a unique background in high-quality data solutions, Zeotap is a leader in the European CDP market, empowering enterprises with a secure, privacy-first solution to harness the full potential of their customer data.

Responsibilities:
* You will design and implement robust, scalable, high-performance data pipelines using Spark, Scala, and Airflow, with familiarity with Google Cloud.
* You will develop, construct, test, and maintain architectures such as databases and large-scale processing systems.
* You will assemble large, complex data sets that meet functional and non-functional business requirements.
* You will build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from various data sources.
* You will collaborate with data scientists and other stakeholders to improve data models that drive business processes.
* You will implement data flow and tracking using Apache Airflow.
* You will ensure data quality and integrity across the various data processing stages.
* You will monitor and optimize the performance of the data processing pipeline.
* You will tune and optimize Spark jobs for performance and efficiency, ensuring they run effectively on large-scale data sets.
* You will troubleshoot and resolve issues related to data pipelines and infrastructure.
* You will stay up to date with the latest technologies and best practices in Big Data and data engineering.
* You will adhere to Zeotap's company, privacy, and information security policies and procedures.
* You will complete all assigned awareness trainings on time.

Requirements:
* 2+ years of experience building and deploying high-scale solutions
* Very good problem-solving skills and clear fundamentals of data structures and algorithms
* Expert coding skills in Java or Scala
* Expert coding skills in Go or Python are a huge plus
* Apache Spark or other Big Data stack experience is mandatory
* High-level and low-level design skills
* Deep understanding of any OLTP, OLAP, NoSQL, or graph database is a huge plus
* Deep knowledge of distributed systems and design is a huge plus
* Hands-on experience with streaming technologies like Kafka, Flink, Samza, etc. is a huge plus
* Good knowledge of scalable Big Data technologies
* Bachelor's or Master's degree in information systems, computer science, or a related field is preferred

What we offer:
* Competitive compensation and attractive perks
* Health insurance coverage
* Flexible working support, guidance, and training provided by a highly experienced team
* Fast-paced work environment
* Work with very driven entrepreneurs and a network of global senior investors across telco, data, advertising, and technology

Zeotap welcomes all; we are an equal employment opportunity and affirmative action employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.

Interested in joining us?

We look forward to hearing from you!

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar Design, SaaS, Python, Cloud, Senior, and Engineer jobs:

$52,500 — $92,500/year

#Benefits
401(k)
Distributed team
Async
Vision insurance
Dental insurance
Medical insurance
Unlimited vacation
Paid time off
4 day workweek
401k matching
Company retreats
Coworking budget
Learning budget
Free gym membership
Mental wellness budget
Home office budget
Pay in crypto
Pseudonymous
Profit sharing
Equity compensation
No whiteboard interview
No monitoring system
No politics at work
We hire old (and young)
#Location
Bengaluru, Karnataka
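The pipeline work described in the responsibilities above centers on expressing ETL steps as a dependency graph that an orchestrator like Airflow runs in order. As a rough, framework-free illustration of that idea (the task names are hypothetical, and this uses plain Python's `graphlib` rather than the Airflow API), the scheduling core is a topological ordering of tasks:

```python
from graphlib import TopologicalSorter

# Hypothetical ETL steps: each key maps a task to the set of
# tasks that must finish before it may start.
dag = {
    "transform": {"extract_orders", "extract_users"},  # runs after both extracts
    "load": {"transform"},
    "validate": {"load"},
}

# static_order() emits every task only after all of its predecessors.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

An Airflow DAG declares the same structure with operators and `>>` dependencies; the ordering guarantee is identical.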
๐ Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
Government Employees Insurance Company is hiring a
Remote Staff Engineer
Our Senior Staff Engineer works with our Staff and Sr. Engineers to innovate and build new systems, improve and enhance existing systems, and identify new opportunities to apply your knowledge to solve critical problems. You will lead the strategy and execution of a technical roadmap that will increase the velocity of delivering products and unlock new engineering capabilities. The ideal candidate is a self-starter who has deep technical expertise in their domain.

Position Responsibilities

As a Senior Staff Engineer, you will:
* Provide technical leadership to multiple areas and provide technical and thought leadership to the enterprise
* Collaborate across team members and across the tech organization to solve our toughest problems
* Develop and execute technical software development strategy for a variety of domains
* Be accountable for the quality, usability, and performance of the solutions
* Utilize programming languages like C#, Java, Python, or other object-oriented languages, SQL and NoSQL databases, container orchestration services including Docker and Kubernetes, and a variety of Azure tools and services
* Be a role model and mentor, helping to coach and strengthen the technical expertise and know-how of our engineering and product community
* Influence and educate executives
* Consistently share best practices and improve processes within and across teams
* Analyze costs and forecasts, incorporating them into business plans
* Determine and support resource requirements, evaluate operational processes, measure outcomes to ensure desired results, and demonstrate adaptability and sponsorship of continuous learning

Qualifications
* Exemplary ability to design, perform experiments, and influence engineering direction and product roadmap
* Experience partnering with engineering teams and transferring research to production
* Extensive experience leading and building full-stack application and service development, with a strong focus on SaaS products/platforms
* Proven expertise in designing and developing microservices using C#, gRPC, Python, Django, Kafka, and Apache Spark, with a deep understanding of both API and event-driven architectures
* Proven experience designing and delivering highly resilient event-driven and messaging-based solutions at scale with minimal latency
* Deep hands-on experience building complex SaaS systems in large-scale, business-focused environments, with strong knowledge of Docker and Kubernetes
* Fluency and specialization in at least two modern OOP languages such as C#, Java, C++, or Python, including object-oriented design
* Strong understanding of open-source databases like MySQL and PostgreSQL, and a strong foundation in NoSQL databases like Cosmos DB, Cassandra, and Apache Trino
* In-depth knowledge of CS data structures and algorithms
* Ability to excel in a fast-paced, startup-like environment
* Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
* Experience with microservices-oriented architecture and extensible REST APIs
* Experience building the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems
* Experience implementing security protocols across services and products: understanding of Active Directory, Windows Authentication, SAML, OAuth
* Fluency in DevOps concepts, cloud architecture, and the Azure DevOps Operational Framework
* Experience leveraging PowerShell scripting
* Experience with existing operational portals such as the Azure Portal
* Experience with application monitoring tools and performance assessments
* Experience with Azure networking (subscriptions, security zoning, etc.)
* 10+ years of full-stack development experience (C#/Java/Python/Go), with expertise in client-side and server-side frameworks
* 8+ years of experience with architecture and design
* 6+ years of experience with open-source frameworks
* 4+ years of experience with AWS, GCP, Azure, or another cloud service

Education
* Bachelor's degree in Computer Science, Information Systems, or equivalent education or work experience

Annual Salary
$115,000.00 - $260,000.00

The above annual salary range is a general guideline. Multiple factors are taken into consideration to arrive at the final hourly rate/annual salary to be offered to the selected candidate. Factors include, but are not limited to, the scope and responsibilities of the role, the selected candidate's work experience, education and training, the work location, as well as market and business considerations. At this time, GEICO will not sponsor a new applicant for employment authorization for this position.

Benefits: As an Associate, you'll enjoy our Total Rewards Program* to help secure your financial future and preserve your health and well-being, including:
* Premier Medical, Dental and Vision Insurance with no waiting period**
* Paid Vacation, Sick and Parental Leave
* 401(k) Plan
* Tuition Reimbursement
* Paid Training and Licensures

*Benefits may be different by location. Benefit eligibility requirements vary and may include length of service.
**Coverage begins on the date of hire. Must enroll in New Hire Benefits within 30 days of the date of hire for coverage to take effect.

The equal employment opportunity policy of the GEICO Companies provides for a fair and equal employment opportunity for all associates and job applicants regardless of race, color, religious creed, national origin, ancestry, age, gender, pregnancy, sexual orientation, gender identity, marital status, familial status, disability, or genetic information, in compliance with applicable federal, state, and local law. GEICO hires and promotes individuals solely on the basis of their qualifications for the job to be filled.
GEICO reasonably accommodates qualified individuals with disabilities to enable them to receive equal employment opportunity and/or perform the essential functions of the job, unless the accommodation would impose an undue hardship on the Company. This applies to all applicants and associates. GEICO also provides a work environment in which each associate is able to be productive and work to the best of their ability. We do not condone or tolerate an atmosphere of intimidation or harassment. We expect and require the cooperation of all associates in maintaining an atmosphere free from discrimination and harassment, with mutual respect by and for all associates and applicants.

For more than 75 years, GEICO has stood out from the rest of the insurance industry! We are one of the nation's largest and fastest-growing auto insurers thanks to our low rates, outstanding service, and clever marketing. We're an industry leader employing thousands of dedicated and hard-working associates. As a wholly owned subsidiary of Berkshire Hathaway, we offer associates training and career advancement in a financially stable and rewarding workplace.

Opportunities for Students & Grads
Learn more about GEICO
Learn more about GEICO Diversity and Inclusion
Learn more about GEICO Benefits

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar Design, SaaS, Python, Docker, DevOps, Education, Cloud, API, Senior, and Engineer jobs:

$47,500 — $97,500/year
#Location
MD Chevy Chase (Office) - JPS
Join SADA India as a Senior Data Engineer on the Enterprise Support service team!

Your Mission

As a Sr. Data Engineer on the Enterprise Support service team at SADA, you will reduce customer anxiety about running production workloads in the cloud by implementing and iteratively improving observability and reliability. You will have the opportunity to engage with our customers in a meaningful way by defining, measuring, and improving key business metrics; eliminating toil through automation; inspecting code, design, implementation, and operational procedures; enabling experimentation by helping create a culture of ownership; and winning customer trust through education, skill sharing, and implementing recommendations. Your efforts will accelerate our customers' cloud adoption journey, and we will be with them through the transformation of their applications, infrastructure, and internal processes. You will be part of a new social contract between customers and service providers that demands shared responsibility and accountability: our partnership with our customers will ensure we are working towards a common goal and share a common fate.

This is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute their recommendations to our customers, and with Professional Services on large projects that require PMO support.

Pathway to Success

#MakeThemRave is at the foundation of all our engineering. Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems in the Google Cloud Platform.

Your success starts by positively impacting the direction of a fast-growing practice with vision and passion. You will be measured bi-yearly by the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions.

As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations

Customer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly or bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.

Onboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals.

Job Requirements

Required Credentials:
* Google Professional Data Engineer certified, or able to complete the certification within the first 45 days of employment
* A secondary Google Cloud certification in any other specialization

Required Qualifications:
* 4+ years of experience in cloud support
* Experience supporting customers, preferably in 24/7 environments
* Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
* Experience writing software in two or more languages such as Python, Java, Scala, or Go
* Experience building production-grade data solutions (relational and NoSQL)
* Experience with systems monitoring/alerting, capacity planning, and performance tuning
* Experience with BI tools like Tableau, Looker, etc. is an advantage
* Consultative mindset that delights the customer by building good rapport with them to fully understand their requirements and provide accurate solutions

Useful Qualifications:
* Mastery in at least one of the following domain areas:
  * Google Cloud Dataflow: building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow, and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ; auto-scaling Dataflow clusters and troubleshooting cluster operation issues
  * Data integration tools: building data pipelines using modern data integration tools such as Fivetran, Striim, Data Fusion, etc. Must have hands-on experience configuring and integrating with multiple data sources within and outside of Google Cloud
  * Large enterprise migration: migrating entire cloud or on-prem assets to Google Cloud, including data lakes, data warehouses, databases, business intelligence, jobs, etc. Provide consultations for optimizing cost, defining methodology, and coming up with a plan to execute the migration
* Experience with IoT architectures and building real-time data streaming pipelines
* Experience operationalizing machine learning models on large datasets
* Demonstrated leadership and self-direction, including a willingness to teach others and learn new techniques
* Demonstrated skill in selecting the right statistical tools given a data analysis problem
* Understanding of Chaos Engineering
* Understanding of PCI, SOC2, and HIPAA compliance standards
* Understanding of the principle of least privilege and security best practices
* Understanding of cryptocurrency and blockchain technology

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar Cloud, Senior, and Engineer jobs:

$60,000 — $100,000/year
#Location
Thiruvananthapuram, Kerala, India
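The batch/streaming ETL work called out under the qualifications above revolves around grouping an unbounded event stream into windows before aggregating. As a rough, dependency-free sketch of that idea (the event data and window size are hypothetical; a real pipeline would use Apache Beam or Dataflow primitives), fixed-window counting looks like:

```python
from collections import defaultdict

# Hypothetical click events: (timestamp_seconds, user_id).
events = [(3, "a"), (12, "b"), (14, "a"), (31, "a"), (34, "c")]

WINDOW = 10  # fixed (tumbling) window size in seconds

# Assign each event to the window containing its timestamp, then
# count events per (window_start, user) pair -- the same shape a
# Beam FixedWindows + per-key count pipeline would produce.
counts = defaultdict(int)
for ts, user in events:
    window_start = (ts // WINDOW) * WINDOW
    counts[(window_start, user)] += 1

print(dict(counts))
```

In a streaming engine the same grouping happens incrementally, with watermarks deciding when a window's count is final.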
Remote Senior AI Infra Engineer, AI/ML and Data Infrastructure
The Team

Across our work in Science, Education, and within our communities, we pair technology with grantmaking, impact investing, and collaboration to help accelerate the pace of progress toward our mission. Our Central team provides the support needed to push this work forward.

The Central team at CZI consists of our Finance, People & DEI, Real Estate, Events, Workplace, Facilities, Security, Brand & Communications, Business Systems, Central Operations, Strategic Initiatives, and Ventures teams. These teams provide strategic support and operational excellence across the board at CZI.

The Opportunity

By pairing engineers with leaders in our science and education teams, we can bring AI/ML technology to the table in new ways to help drive solutions. We are uniquely positioned to design, build, and scale software systems to help educators, scientists, and policy experts better address the myriad challenges they face. Our technology team is already helping schools bring personalized learning tools to teachers and schools across the country. We are also supporting scientists around the world as they develop a comprehensive reference atlas of all cells in the human body, and are developing the capacity to apply state-of-the-art methods in artificial intelligence and machine learning to solve important problems in the biomedical sciences.

The AI/ML and Data Engineering Infrastructure organization works on building shared tools and platforms to be used across all of the Chan Zuckerberg Initiative, partnering with and supporting the work of a wide range of Research Scientists, Data Scientists, and AI Research Scientists, as well as a broad range of Engineers focusing on Education and Science domain problems. Members of the shared infrastructure engineering team have an impact on all of CZI's initiatives by enabling the technology solutions used by other engineering teams at CZI to scale.
A person in this role will build these technology solutions and help to cultivate a culture of shared best practices and knowledge around core engineering.

What You'll Do
* Participate in the technical design and building of efficient, stable, performant, scalable, and secure AI/ML and data infrastructure engineering solutions.
* Write hands-on code for our Deep Learning and Machine Learning models.
* Design and implement complex systems integrating with our large-scale AI/ML GPU compute infrastructure and platform, making working across multiple clouds easier and more convenient for our Research Engineers, ML Engineers, and Data Scientists.
* Use your solid experience and skills in building containerized applications and infrastructure using Kubernetes, in support of our large-scale GPU research cluster as well as our various heterogeneous and distributed AI/ML environments.
* Collaborate with other team members on the design and build of our cloud-based AI/ML platform solutions, which include Databricks Spark, Weaviate vector databases, and our hosted cloud GPU compute services running containerized PyTorch on large-scale Kubernetes.
* Collaborate with our partners on data management solutions for our heterogeneous collection of complex datasets.
* Help build tooling that makes optimal use of our shared infrastructure, empowering our AI/ML efforts with a world-class GPU compute cluster and other compute environments such as our AWS-based services.

What You'll Bring
* BS or MS degree in Computer Science or a related technical discipline, or equivalent experience
* 5+ years of relevant coding experience
* 3+ years of systems architecture and design experience, with a broad range of experience across Data, AI/ML, Core Infrastructure, and Security Engineering
* Experience scaling containerized applications on Kubernetes or Mesos, including expertise in creating custom containers using secure AMIs and continuous deployment systems that integrate with Kubernetes or Mesos (Kubernetes preferred)
* Proficiency with Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure, and experience with on-prem and colocation service hosting environments
* Proven coding ability with a systems language such as Rust, C/C++, C#, Go, Java, or Scala
* Demonstrated ability with a scripting language such as Python, PHP, or Ruby
* AI/ML platform operations experience in an environment with challenging data and systems platform problems, including large-scale Kafka and Spark deployments (or their counterparts such as Pulsar, Flink, and/or Ray), as well as workflow scheduling tools such as Apache Airflow, Dagster, or Apache Beam
* MLOps experience working with medium- to large-scale GPU clusters in Kubernetes (Kubeflow), HPC environments, or large-scale cloud-based ML deployments
* Working knowledge of Nvidia CUDA and custom AI/ML libraries
* Knowledge of Linux systems optimization and administration
* Understanding of Data Engineering, Data Governance, Data Infrastructure, and AI/ML execution platforms
* PyTorch, Keras, or TensorFlow experience is a strong nice-to-have
* HPC and Slurm experience is a strong nice-to-have

Compensation

The Redwood City, CA base pay range for this role is $190,000 - $285,000. New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. Actual placement in the range is based on job-related skills and experience, as evaluated throughout the interview process. Pay ranges outside Redwood City are adjusted based on the cost of labor in each respective geographical market. Your recruiter can share more about the specific pay range for your location during the hiring process.

Benefits for the Whole You

We're thankful to have an incredible team behind our work.
To honor their commitment, we offer a wide range of benefits to support the people who make all we do possible.

* CZI provides a generous employer match on employee 401(k) contributions to support planning for the future.
* An annual benefit for employees that can be used most meaningfully for them and their families, such as housing, student loan repayment, childcare, commuter costs, or other life needs.
* CZI Life of Service Gifts are awarded to employees to "live the mission" and support the causes closest to them.
* Paid time off to volunteer at an organization of your choice.
* Funding for select family-forming benefits.
* Relocation support for employees who need assistance moving to the Bay Area.
* And more!

Commitment to Diversity

We believe that the strongest teams and best thinking are defined by the diversity of voices at the table. We are committed to fair treatment and equal access to opportunity for all CZI team members and to maintaining a workplace where everyone feels welcomed, respected, supported, and valued. Learn about our diversity, equity, and inclusion efforts.

If you're interested in a role but your previous experience doesn't perfectly align with each qualification in the job description, we still encourage you to apply, as you may be the perfect fit for this or another role.

Explore our work modes, benefits, and interview process at www.chanzuckerberg.com/careers.

#LI-Remote

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar Design, Amazon, Recruiter, Cloud, Senior, and Engineer jobs:

$42,500 — $82,500/year
#Location
Redwood City, California, United States
CAPCO POLAND

*We are looking for a Poland-based candidate. The job is remote but may require some business trips.

Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to #BeYourselfAtWork. We celebrate individuality and recognize that diversity and inclusion, in all forms, is critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table, so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.

Capco Poland is a global technology and management consultancy specializing in driving digital transformation across the financial services industry. We are passionate about helping our clients succeed in an ever-changing industry.

We are also experts focused on development, automation, innovation, and long-term projects in financial services. At Capco, you can code, write, create, and live at your maximum capabilities without getting dull, tired, or foggy.

We're seeking a skilled Mid Big Data Engineer to join our team.
The ideal candidate will be responsible for designing, implementing and maintaining scalable data pipelines and solutions on on-prem / migration / cloud projects for large-scale data processing and analytics.\n\n \n\nTHINGS YOU WILL DO\n\n\n* Design, develop and maintain robust data pipelines using Scala or Python, Spark, Hadoop, SQL for batch and streaming data processing\n\n* Collaborate with cross-functional teams to understand data requirements and design efficient solutions that meet business needs \n\n* Optimize Spark jobs and data processing workflows for performance, scalability and reliability\n\n* Ensure data quality, integrity and security throughout the data lifecycle\n\n* Troubleshoot and resolve data pipeline issues in a timely manner to minimize downtime and impact on business operations\n\n* Stay updated on industry best practices, emerging technologies, and trends in big data processing and analytics\n\n* Document design specifications, deployment procedures and operational guidelines for data pipelines and systems\n\n* Provide technical guidance and mentorship to new joiners\n\n\n\n\n \n\nTECH STACK: Python or Scala, OOP, Spark, SQL, Hadoop\n\nNice to have: GCP, Pub/Sub, BigQuery, Kafka, Juniper, Apache NiFi, Hive, Impala, Cloudera, CI/CD\n\n \n\nSKILLS & EXPERIENCES YOU NEED TO GET THE JOB DONE\n\n\n* min. 
3-4 years of experience as a Data Engineer/Big Data Engineer\n\n* University degree in computer science, mathematics, natural sciences, or a similar field and relevant working experience\n\n* Excellent SQL skills, including advanced concepts\n\n* Very good programming skills in Python or Scala\n\n* Experience in Spark and Hadoop\n\n* Experience in OOP\n\n* Experience using agile frameworks like Scrum\n\n* Interest in financial services and markets\n\n* Nice to have: experience or knowledge of GCP \n\n* Fluent English communication and presentation skills\n\n* Sense of humor and positive attitude\n\n\n\n\n \n\nWHY JOIN CAPCO?\n\n\n* Employment contract and/or Business-to-Business – whichever you prefer\n\n* Possibility to work remotely\n\n* Speaking English on a daily basis, mainly in contact with foreign stakeholders and peers\n\n* Multiple employee benefits packages (MyBenefit Cafeteria, private medical care, life insurance)\n\n* Access to a 3,000+ business courses platform (Udemy)\n\n* Access to required IT equipment\n\n* Paid Referral Program\n\n* Participation in charity events, e.g. Szlachetna Paczka\n\n* Ongoing learning opportunities to help you acquire new skills or deepen existing expertise\n\n* Being part of the core squad focused on the growth of the Polish business unit\n\n* A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients\n\n* A work culture focused on innovation and creating lasting value for our clients and employees\n\n\n\n\n \n\nONLINE RECRUITMENT PROCESS STEPS*\n\n\n* Screening call with the Recruiter\n\n* Technical interview: first stage\n\n* Client interview\n\n* Feedback/Offer\n\n\n\n\n*The recruitment process may be modified \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Design, Python, Recruiter, Cloud, Senior and Engineer jobs that are similar:\n\n
$57,500 — $92,500/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nKraków, Lesser Poland Voivodeship, Poland
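As a rough, hedged illustration of the batch ETL work this Capco listing describes: the sketch below is a toy Python stand-in for a Spark filter-then-aggregate stage (plain dictionaries replace DataFrames; the function and field names are invented, not from the posting).

```python
from collections import defaultdict

def run_batch_aggregation(records):
    """Group raw event records by account and sum their amounts.

    A miniature stand-in for the filter/groupBy/aggregate stage a
    Spark batch job in this role would run at much larger scale.
    """
    totals = defaultdict(float)
    for rec in records:
        # Data-quality gate: drop malformed rows instead of failing the job.
        if rec.get("account") is None or rec.get("amount") is None:
            continue
        totals[rec["account"]] += rec["amount"]
    return dict(totals)

events = [
    {"account": "A", "amount": 10.0},
    {"account": "B", "amount": 5.0},
    {"account": "A", "amount": 2.5},
    {"account": None, "amount": 1.0},  # malformed row, filtered out
]
print(run_batch_aggregation(events))  # {'A': 12.5, 'B': 5.0}
```

In a real Spark job the same shape would be a `filter` followed by `groupBy(...).sum(...)`, distributed across executors.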
Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
\nThe Mortgage Engineering team is seeking a highly skilled and experienced Senior Backend Engineer with a strong focus on microservices architecture to join our team. The ideal candidate will be proficient in Java, and possess in-depth knowledge of Kafka, SQS, Redis, Postgres, Grafana, and Kubernetes. You are an expert in working with and scaling event-driven systems, webhooks, and RESTful APIs, and in solving challenges with concurrency and distributed systems. As a Senior Backend Engineer at Ocrolus, you will be responsible for designing, developing, and maintaining highly scalable and reliable backend systems. You will work closely with product managers, designers, and other engineers to ensure our services meet the highest standards of performance and reliability, specifically tailored to the needs of the mortgage industry.\n\nKey Responsibilities:\n\n\n* Design, develop, and maintain backend services and microservices architecture using Java.\n\n* Implement event-driven systems utilizing Kafka and AWS SQS for real-time data processing and messaging.\n\n* Optimize and manage in-memory data stores with Redis for high-speed caching and data retrieval.\n\n* Develop and maintain robust database solutions with Postgres, ensuring data integrity and performance with PgAnalyze.\n\n* Deploy, monitor, and manage containerized applications using Kubernetes and Terraform, ensuring their scalability and resilience, and manage our cloud infrastructure.\n\n* Collaborate closely with product managers and designers to understand requirements and deliver technical solutions that meet business needs.\n\n* Develop and maintain RESTful APIs and gRPC services to support seamless integration with frontend applications and third-party services.\n\n* Ensure secure and efficient authentication and authorization processes using OAuth.\n\n* Manage codebases in a monorepo environment using Bazel for build automation.\n\n* Troubleshoot and resolve client support issues in a timely manner, ensuring 
minimal disruption to service.\n\n* Continuously explore and implement new technologies and frameworks to improve system performance and efficiency.\n\n* Write and maintain technical documentation on Confluence to document technical plans and processes, and facilitate knowledge sharing across the team.\n\n* Mentor junior engineers and contribute to the overall growth and development of the engineering team.\n\n\n\n\nRequired Qualifications:\n\n\n* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.\n\n* 5+ years of professional experience in backend development with a focus on microservices.\n\n* Proficiency in Java, with a strong preference for expertise in Java and the Spring framework.\n\n* Strong experience with Apache Kafka for building event-driven architectures.\n\n* Hands-on experience with AWS SQS for message queuing and processing.\n\n* Expertise in Redis for caching and in-memory data management.\n\n* Solid understanding of Postgres or other relational databases, including performance tuning, migrations, and optimization.\n\n* Proven experience with Kubernetes for container orchestration and management.\n\n* Proficiency in developing and consuming RESTful APIs and gRPC services.\n\n* Proficiency with the command line and Git for version control, and GitHub for code reviews.\n\n* Familiarity with OAuth for secure authentication and authorization.\n\n* Strong understanding of software development best practices, including version control, testing, and CI/CD automation.\n\n* Excellent problem-solving skills and the ability to work independently and as part of a team.\n\n* Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.\n\n\n\n\nPreferred Qualifications:\n\n\n* Experience working in the mortgage and fintech industries, with a deep understanding of domain-specific challenges and B2B SaaS requirements.\n\n* Experience managing codebases in a monorepo environment with 
Bazel for build automation.\n\n* Understanding of security best practices and implementation in microservices.\n\n* Experience with performance monitoring and logging tools such as Grafana, Sentry, PgAnalyze, Prometheus, and New Relic.\n\n* Familiarity with cloud platforms such as AWS.\n\n* Familiarity with Python.\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Redis, Java, Cloud, Git, Senior, Junior, Engineer and Backend jobs that are similar:\n\n
$65,000 — $115,000/year\n
\n\n#Location\nGurgaon, Haryana, India
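The Redis caching responsibility in this Ocrolus listing usually takes the form of a cache-aside pattern. Below is a hedged, minimal Python sketch: a plain dict stands in for Redis, and the class, key names, and TTL are invented for illustration (a real service would use a Redis client with `GET`/`SET` and an expiry).

```python
import time

class CacheAside:
    """Minimal cache-aside pattern: check the cache first, fall back to
    the database on a miss, then populate the cache with a TTL.
    A dict stands in for Redis here; keys map to (value, expires_at)."""

    def __init__(self, db, ttl_seconds=60):
        self.db = db            # the "database" (here, just a dict)
        self.ttl = ttl_seconds
        self._cache = {}

    def get(self, key):
        entry = self._cache.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]     # cache hit, skip the database
        value = self.db[key]    # cache miss: read from the database
        self._cache[key] = (value, time.monotonic() + self.ttl)
        return value

db = {"loan:42": {"status": "underwriting"}}  # hypothetical record
cache = CacheAside(db)
print(cache.get("loan:42"))  # {'status': 'underwriting'}
```

The design choice worth noting: cache-aside keeps the database authoritative, so a cache flush is always safe; the TTL bounds how stale a read can be.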
\nJoin SADA India as a Senior Data Engineer!\n\nYour Mission \n\nAs a Sr. Data Engineer on the Enterprise Support service team at SADA, you will reduce customer anxiety about running production workloads in the cloud by implementing and iteratively improving observability and reliability. You will have the opportunity to engage with our customers in a meaningful way by defining, measuring, and improving key business metrics; eliminating toil through automation; inspecting code, design, implementation, and operational procedures; enabling experimentation by helping create a culture of ownership; and winning customer trust through education, skill sharing, and implementing recommendations. Your efforts will accelerate our customers' cloud adoption journey, and we will be with them through the transformation of their applications, infrastructure, and internal processes. You will be part of a new social contract between customers and service providers that demands shared responsibility and accountability: our partnership with our customers will ensure we are working towards a common goal and share a common fate.\n\nThis is primarily a customer-facing role. You will also work closely with SADA's Customer Experience team to execute their recommendations to our customers, and with Professional Services on large projects that require PMO support.\n\nPathway to Success \n\n#MakeThemRave is at the foundation of all our engineering. Our motivation is to provide customers with an exceptional experience in migrating, developing, modernizing, and operationalizing their systems in the Google Cloud Platform.\n\nYour success starts by positively impacting the direction of a fast-growing practice with vision and passion. 
You will be measured bi-yearly on the breadth, magnitude, and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions.\n\nAs you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.\n\nExpectations\n\nCustomer Facing - You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.\n\nOnboarding/Training - The first several weeks of onboarding are dedicated to learning and will encompass learning materials/assignments and compliance training, as well as meetings with relevant individuals.\n\nJob Requirements\n\nRequired Credentials:\n\n\n* Google Professional Data Engineer Certified, or able to complete the certification within the first 45 days of employment \n\n* A secondary Google Cloud certification in any other specialization\n\n\n\n\nRequired Qualifications: \n\n\n* 5+ years of experience in Cloud support\n\n* Experience in supporting customers, preferably in 24/7 environments\n\n* Experience working with Google Cloud data products (CloudSQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)\n\n* Experience writing software in two or more languages such as Python, Java, Scala, or Go\n\n* Experience in building production-grade data solutions (relational and NoSQL)\n\n* Experience with systems monitoring/alerting, capacity planning, and performance tuning\n\n* Experience with BI tools like Tableau, Looker, etc. will be an advantage\n\n* 
Consultative mindset that delights the customer by building good rapport with them to fully understand their requirements and provide accurate solutions\n\n\n\n\nUseful Qualifications:\n\n\n* Mastery in at least one of the following domain areas:\n\n\n* Google Cloud Dataflow: building batch/streaming ETL pipelines with frameworks such as Apache Beam or Google Cloud Dataflow and working with messaging systems like Pub/Sub, Kafka, and RabbitMQ; autoscaling Dataflow clusters, troubleshooting cluster operation issues\n\n* Data Integration Tools: building data pipelines using modern data integration tools such as Fivetran, Striim, Data Fusion, etc. Must have hands-on experience configuring and integrating with multiple data sources within and outside of Google Cloud\n\n* Large Enterprise Migration: migrating entire cloud or on-prem assets to Google Cloud, including Data Lakes, Data Warehouses, Databases, Business Intelligence, Jobs, etc. Provide consultations for optimizing cost, defining methodology, and coming up with a plan to execute the migration.\n\n\n\n\n* Experience with IoT architectures and building real-time data streaming pipelines\n\n* Experience operationalizing machine learning models on large datasets\n\n* Demonstrated leadership and self-direction – a willingness to teach others and learn new techniques\n\n* Demonstrated skills in selecting the right statistical tools given a data analysis problem\n\n* Understanding of Chaos Engineering\n\n* Understanding of PCI, SOC2, and HIPAA compliance standards\n\n* Understanding of the principle of least privilege and security best practices\n\n* Understanding of cryptocurrency and blockchain technology\n\n\n\n\n \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Cloud, Senior and Engineer jobs that are similar:\n\n
$60,000 — $110,000/year\n
\n\n#Location\nThiruvananthapuram, Kerala, India
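The Dataflow/Beam qualification in this SADA listing centres on windowed aggregation of streaming data. As a hedged sketch of the core idea only, the pure-Python function below assigns events to fixed-size tumbling windows and counts per window; a real pipeline would express the same thing with Beam's `FixedWindows` and a combiner (event and key names here are invented).

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Assign each (timestamp, key) event to a fixed-size tumbling
    window and count events per (window_start, key) pair. This is the
    essence of a windowed aggregation in Beam/Dataflow, minus the
    distributed execution, triggers, and late-data handling."""
    counts = defaultdict(int)
    for ts, key in events:
        # Integer division snaps the timestamp to its window's start.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (30, "click"), (65, "view"), (70, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```

What this sketch deliberately omits, and what the role's troubleshooting work is mostly about, is watermarks and late-arriving data; in Beam those are handled by triggers and allowed lateness rather than application code.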
Who We Are:\n\nFounded in 2018 by engineers from Stanford, Cisco Meraki, and Samsara, Spot AI is the fastest-growing Video Intelligence company in the U.S. We are upending the $30 billion video surveillance market with an AI Camera System to help people at work create safer, smarter organizations. In the process, we're disrupting video security to create a new category of Video Intelligence.\n\nWe are experiencing tremendous growth and are deployed at thousands of locations across businesses in 17 different verticals, ranging from local businesses to Fortune 500s. Our customers range from warehousing and healthcare to nonprofits and car washes, including SpaceX, ExtraSpace Storage, WineDirect, YMCA, and Veg Fresh Farms.\n\nWe've recently raised $40M in Series B financing to continue to transform how organizations use their video footage. We're backed by Scale Venture Partners, Redpoint Ventures, Bessemer Venture Partners, StepStone Group, and MVP Ventures.\n\nWho You Are:\n\nYou're a self-driven, highly skilled AI Infrastructure Engineer with expertise in designing, architecting, and productizing computer vision or other AI systems in the cloud and on the edge. You excel at building and shipping infrastructure and tooling for computer vision or other AI products in production, and bring a combination of passion and creativity along with a pragmatic approach. You have the skills to architect distributed AI systems that can handle millions of hours of customer footage. 
You know how to work cross-functionally to meet the needs of multiple stakeholders, and you have a track record of success.\n\nNote: candidates for this role must be geographically located in the United States.\nWhat Excites You:\n\n\n\n* Working with and learning from an extremely high-caliber engineering team\n\n* Developing real-time, high-availability video intelligence systems in the cloud and on the edge, with lots of ownership and opportunities to implement novel solutions\n\n* Delivering video intelligence infrastructure and tools in ways that are thoughtful about end-user privacy, such as edge inference and federated training\n\n* Designing, owning and scaling end-to-end edge and cloud AI platforms\n\n* Operating with autonomy in a fast-paced environment\n\n* Communicating directly with customers to better understand their use cases for iterating on the future product roadmap\n\n\n\nWhat Gets Our Attention:\n\n\n* 3+ years of experience building and productizing AI infrastructure at scale, both in the cloud AND on the edge\n\n* Track record of delivering outstanding results\n\n* Strong engineering fundamentals with various back-end technologies\n\n* Expertise leveraging leading cloud AI platforms, such as GCP, Azure, or AWS, for real-time inferencing\n\n* Experience with big data processing technologies and databases, such as Apache Beam, Hadoop, Spark, Kafka, Pub/Sub, Dataflow, BigQuery or Bigtable\n\n* Experience with real-time, edge AI systems with limited memory and compute resources\n\n* Experience building MLOps tools for automated testing, training and deployment\n\n* Outstanding communication skills\n\n* Ability to prioritize and synthesize engineering tasks with first-principles thinking\n\n* Ability to rapidly pick up new technologies and techniques and implement them\n\n* Bachelor's degree in computer science or a related field; master's degree or higher preferred\n\n\n\nWhat's In It For You:\n\n\nBase Salary Range: $123,000 - $190,000. 
Offered salary will be based on the candidate's demonstrated attributes and competencies related to the role, as well as the candidate's geographical location within the United States. Your recruiting partner can share more details about the compensation range as it relates to your geography once your interview process has begun.\n\nGenerous early-stage equity\n\nMedical, dental and vision plan options \n\n401K with employer match\n\nFlexible and supportive time-off practices, including self-managed PTO and a generous new-parent leave policy\n\nLearning and development opportunities\n\nRemote work flexibility, including a stipend to set up your ideal home office\n\n\n\n\nWhat We Value:\n\nWe operate under a trio of company values:\n\nCustomer First, Always. We are relentlessly curious about our customers' goals, and seek the simplest solutions to solve their problems.\n\nOwn Your Outcomes. We bias towards action, move fast, and iterate. Everyone on our team is empowered to make decisions.\n\nIt's a team effort. We help each other succeed. We leverage each other's strengths to accomplish big goals together.\n\n\n\nAnd, we are creating and cultivating a diverse and inclusive culture where we celebrate individuals for what they accomplish, no matter who they are! As an equal opportunity employer, we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.\n\nCome join our journey! \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Video, Cloud, Senior, Excel and Engineer jobs that are similar:\n\n
$50,000 — $80,000/year\n
\n\n#Location\nSan Francisco, California, United States
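One concrete instance of the "limited memory and compute" constraint this Spot AI listing mentions is frame sampling before inference. The sketch below is purely illustrative (function name, frame rates, and parameters are invented, not from the posting): it keeps every Nth frame so an edge device only runs its model within a fixed compute budget.

```python
def sample_frames(frame_ids, target_fps, source_fps=30):
    """Downsample a video stream by keeping every Nth frame, a common
    trick for staying within an edge device's compute budget before
    running model inference on the surviving frames."""
    stride = max(1, source_fps // target_fps)
    return [f for i, f in enumerate(frame_ids) if i % stride == 0]

# A 30 fps stream sampled down to ~10 fps keeps every 3rd frame.
print(sample_frames(list(range(10)), target_fps=10))  # [0, 3, 6, 9]
```

Real systems often make the stride adaptive (more frames when motion is detected), but the fixed-stride version shows the basic budget/accuracy trade-off.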
This job post is closed and the position is probably filled. Please do not apply. Work for ZOE and want to re-open this job? Use the edit link in the email when you posted the job!
Closed by robot after apply link errored w/ code 404 1 year ago
\nWe are redefining how people approach their health\n\n\nZOE is combining scientific research at a scale never before imagined and cutting-edge AI to improve the health of millions. \n\n\nCreated by the world's top scientists, our personalised nutrition program is reimagining a fundamental human need – eating well for your own body. Currently available in the US and the UK, ZOE is already helping more than 100k ZOE members to adopt healthier habits and live better. Our work and expertise in biology, engineering, data science, and nutrition science have led to multiple breakthrough papers in leading scientific journals such as Nature Medicine, Science, The Lancet, and more.\n\n\nTo learn more, head to Spotify, Apple Podcasts, or Audible to listen to our Science & Nutrition Podcast (with 3 million listens!) \n\n\nA remote-first, high-growth startup, we are backed by founders, investors, and entrepreneurs who have built multi-billion dollar technology companies. We are always looking for innovative thinkers and builders to join our team on a thrilling mission to tackle epic health problems. Together, we can improve human health and touch millions of lives. \n\n\nWe value inclusivity, transparency, ownership, open-mindedness and diversity. We are passionate about delivering great results and learning in the open. We want our teams to have the freedom to make long-term, high-impact decisions, and the well-being of our teammates and the people around us is a top priority.\n\n\nCheck out what life is like for our tech team on ZOE Tech. \n\n\nWe're looking for a Senior Data Engineer to take ZOE even further.\n\n\nAbout the team\n\n\nThe mission of the Core Science team is to transform research trials and data into personalised, actionable recommendations that reach our members. We are currently developing a feedback loop to measure the efficacy of ZOE's nutrition and health advice, which will drive the evolution of our recommendations. 
In addition, the team is conducting supplementary studies in key areas, such as the microbiome, and constructing a platform to facilitate these trials alongside the main product. The team also maintains close collaboration with other stream-aligned teams to deliver scientific discoveries directly to the app.\n\n\nWe operate in a very dynamic and rewarding environment, where we work closely with all sorts of stakeholders to find the best solutions for both the business and our potential customers. Our agile, cross-functional teams use continuous delivery and regular feedback to ensure we deliver value to our customers on a daily basis. Our systems make use of Python, dbt, Apache Airflow, Kotlin, TypeScript, React, and FastAPI. We deploy and operate our software using Kubernetes, and ML models using Vertex AI in GCP. \n\n\nAbout the role\n\n\nAs a Senior Data Engineer in the Core Science team, you will be working with scientists, data scientists and other engineers to build a platform that empowers our team to conduct scientific research trials and improve the efficacy of ZOE's nutrition and health advice. Every line of code you write will be a catalyst for groundbreaking discoveries.\n\n\nIn this role, you will also have the opportunity to make a significant impact on the personal and professional development of our team by providing guidance, support, and expertise. You will play a crucial role in helping individuals achieve their goals, overcome challenges, and maximise their potential.\n\n\n\nYou'll be\n* Defining the data requirements from the research trials that the Core Science team will run, alongside data coming from the main product experience.\n* Automating data collection from a variety of sources (e.g. labs, questionnaires, study coordination tools) and orchestrating integration of data derived from these trials into our data warehouse. 
\n* Coordinating with different product teams to ensure a seamless app experience for both study participants and paid customers. \n* Ensuring consistency and accuracy of all study data used for research and product development. \n* Conducting exploratory data analysis to understand data patterns and trends. \n* Creating algorithms and ML models when necessary. \n* Ensuring data security and compliance with regulatory standards.\n* Ensuring data accessibility to internal and external stakeholders, with up-to-date documentation on data sources and schemas. \n\n\n\nWe think you'll be great if you...\n* Have 6+ years of experience in data engineering roles, with a proven track record of working on data integration, ETL processes, and data warehousing \n* Are proficient in Python and SQL and have experience with data warehouses (e.g. BigQuery, Snowflake) and interactive computing environments like Jupyter Notebooks.\n* Have knowledge of data governance principles and best practices for ensuring data quality, security, and compliance with regulatory standards.\n* Are detail-oriented and data-savvy, to ensure the accuracy and reliability of the data. \n* Are someone who strives to keep their code clean, tests complete and maintained, and their releases frequent.\n* Have experience with cloud platforms like Google Cloud Platform (GCP) and platforms to schedule and monitor data workflows like Apache Airflow.\n* Have a solid understanding of best practices around CI/CD, containers and what a great release process looks like. \n* Have the ability to collaborate effectively with cross-functional teams and communicate technical concepts to non-technical stakeholders.\n* Have a mindset of collaboration, innovation, and a passion for contributing to groundbreaking scientific discoveries.\n\n\n\nNice to have\n* Have experience with dbt and Apache Airflow. \n* Have experience with ML modelling and MLOps. 
\n* Have experience with privacy-preserving technologies such as federated learning and data synthesis. \n\n\n\n\n\nThese are the ideal skills, attributes, and experience we're looking for in this role. Don't worry if you don't tick all the boxes, especially on the skills and experience front; we're happy to upskill for the right candidate. \n\n\nLife as a ZOEntist – what you can expect from us:\nAs well as industry-benchmarked compensation and all the hardware and software you need, we offer a thoughtfully curated list of benefits. We expect this list to evolve as we continue supporting our team members' long-term personal and professional growth, and their wellbeing. \n\n\nRemote-first: Work flexibly – from home, our London office, or anywhere within the EU \nStock options: So you can share in our growth \nPaid time off: 28 days paid leave (25 holiday days, plus 2 company-wide reset days, and 1 "life event" day) \nEnhanced Parental Leave: On top of the statutory offering\nFlexible private healthcare and life assurance options\nPension contribution: Pay monthly or top up – your choice \nHealth and wellbeing: Like our Employee Assistance Program and Cycle to Work Scheme\nSocial, WFH, and Growth (L&D) budgets. Plus, multiple opportunities to connect, grow, and socialise \n\n\nWe're all about equal opportunities \nWe know that a successful team is made up of diverse people, able to be their authentic selves. To continue growing our team in the best way, we believe that equal opportunities matter, so we encourage candidates from underrepresented backgrounds to apply for this role. You can view our Equal Opportunities statement in full here. \n\n\nA closer look at ZOE \nThink you've heard our name somewhere before? We were the team behind the COVID Symptom Study, which has since become the ZOE Health Study (ZHS). We use the power of community science to conduct large-scale research from the comfort of contributors' own homes. 
Our collective work and expertise in biology, engineering, and data/nutrition science have led to multiple breakthrough papers in leading scientific journals such as Nature Medicine, Science, The Lancet, and more.\n\n\nSeen ZOE in the media recently? Catch our co-founder Professor Tim Spector (one of the world’s most cited scientists) and our Chief Scientist Dr Sarah Berry on this BBC Panorama, and listen to CEO Jonathan Wolf unpack the latest in science and nutrition on our ZOE podcast. \n\n\nOh, and if you’re wondering why ZOE? It translates to “Life” in Greek, which we’re helping ZOE members enjoy to the fullest. \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Python, Cloud, Senior and Engineer jobs that are similar:\n\n
$60,000 — $105,000/year\n
\n\n#Benefits\n
💰 401(k)\n\n🌎 Distributed team\n\n⏰ Async\n\n🤓 Vision insurance\n\n🦷 Dental insurance\n\n🚑 Medical insurance\n\n🏖 Unlimited vacation\n\n🏖 Paid time off\n\n📆 4 day workweek\n\n💰 401k matching\n\n🏔 Company retreats\n\n🏬 Coworking budget\n\n📚 Learning budget\n\n💪 Free gym membership\n\n🧘 Mental wellness budget\n\n🖥 Home office budget\n\n🥧 Pay in crypto\n\n🥸 Pseudonymous\n\n💰 Profit sharing\n\n💰 Equity compensation\n\n⬜️ No whiteboard interview\n\n👀 No monitoring system\n\n🚫 No politics at work\n\n🎅 We hire old (and young)\n\n
\n\n#Location\nUK/EU or compatible timezone (Remote)
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for SportyBet and want to re-open this job? Use the edit link in the email when you posted the job!
*Sporty's sites are some of the most popular on the internet, consistently staying in Alexa's list of top websites for the countries they operate in.*\n\nIn this role, you’ll be responsible for developing microservices in a distributed deployment environment with an emphasis on containerisation with Docker and K8s. You won’t just be writing simple CRUD applications, but instead will be working on the core logic of complex systems that are accessed millions of times a day. We wrote our system from scratch about 3 years ago, so you’ll be working with the latest technology and won’t have to worry about decades-old legacy code.\n\nWe are hiring both Mid and Senior level Backend Engineers; willingness to work in Spring Boot is fine, as long as you are willing to learn and have demonstrable experience in an object-oriented programming language.\n\n**Who We Are**\n\nSporty Group is a consumer internet and technology business with an unrivalled sports media, gaming, social and fintech platform which serves millions of daily active users across the globe via technology and operations hubs across more than 10 countries and 3 continents.\n\nThe recipe for our success is to discover intelligent and energetic people who are passionate about our products and serving our users, and to attract and retain them with a dynamic and flexible work life which empowers them to create value and rewards them generously based upon their contribution.\n\nWe have already built a capable and proven team of 300+ high achievers from a diverse set of backgrounds, and we are looking for more talented individuals to drive further growth and contribute to the innovation, creativity and hard work that currently serve our users.\n\n**Our Stack** *(we don’t expect you to have all of these)*\n\nBackend Application Framework: Spring Boot (Java Config + Embedded Tomcat)\n\nMicro Service Framework: Spring Cloud Dalston (Netflix Eureka + Netflix Zuul + Netflix Ribbon + 
Feign)\n\nDatabase Sharding Middleware: Lede Cetus\n\nDatabase: MySQL and Oracle; MyBatis, Druid\n\nPublic Cache: AWS ElastiCache + Redis\n\nMessage Queue: Apache RocketMQ\n\nDistributed Scheduling: Dangdang Elastic Job\n\nData Index and Search: ElasticSearch\n\nLog Real-time Visualization: ElasticSearch + Logstash + Kibana\n\nBusiness Monitoring: Graphite + Grafana\n\nCluster Monitoring: Zabbix + AWS Cloudwatch\n\nTasking: Elastic Job\n\nServer: Netty \n\n**Responsibilities**\n\nDevelop highly scalable mobile internet backends for millions of users\n\nWork with Product Owners and other development team members to determine new features and user stories needed in new / revised applications or large/complex development projects\n\nParticipate in code reviews with peers and managers to ensure that each increment adheres to the original vision as described in the user story and all standard resource libraries and architecture patterns as appropriate\n\nRespond to support calls for applications in production for quick diagnosis and repair to keep things running smoothly for users\n\nParticipate in all team ceremonies including planning, grooming, product demonstration and team retrospectives\n\nMentoring less experienced team members \n\n**Requirements**\n\nExperience in Spring Boot, Spring Cloud, Spring Data and iBATIS\n\nStrong experience with highly scalable web backends\n\nExperience designing highly transactional systems\n\nAdvanced proficiency in Object Oriented Design (OOD) and analysis\n\nAdvanced proficiency in application of analysis / design engineering functions\n\nAdvanced proficiency in application of non-functional software qualities such as resiliency and maintainability\n\nAdvanced proficiency in modern behavior-driven testing techniques\n\nDeep understanding of Microservices\n\nProficient in SQL\n\nExpert knowledge of application development with technologies like RabbitMQ, MySQL, Redis, etc.\n\nStrong experience with container and cloud solutions such as 
Docker, Kubernetes and AWS Cloud\n\nAn ability to work independently\n\nExcellent communication skills\n\nBased in Europe\n\n**Benefits**\n\nQuarterly and flash bonuses\n\nFlexible working hours\n\nTop-of-the-line equipment\n\nEducation allowance \n\nReferral bonuses \n\nAnnual company holidays - we’re still hoping to make it to Koh Samui in 2021!\n\nHighly talented, dependable co-workers in a global, multicultural organization\n\nWe score 100% on The Joel Test\n\nOur EU team is small enough for you to be impactful \n\nOur business is established and successful, offering stability and security to our employees \n\n#Salary and compensation\n
$40,000 — $90,000/year\n
\n\n#Location\nEurope
This job post is closed and the position is probably filled. Please do not apply. Work for Shopify and want to re-open this job? Use the edit link in the email when you posted the job!
**Company Description**\n\nShopify is the leading omni-channel commerce platform. Merchants use Shopify to design, set up, and manage their stores across multiple sales channels, including mobile, web, social media, marketplaces, brick-and-mortar locations, and pop-up shops. The platform also provides merchants with a powerful back-office and a single view of their business, from payments to shipping. The Shopify platform was engineered for reliability and scale, making enterprise-level technology available to businesses of all sizes. \n\n**Job Description**\n\nOur Data Platform Engineering group builds and maintains the platform that delivers accessible data to power decision-making at Shopify for over a million merchants. We’re hiring high-impact developers across teams:\n\n* The Engine group organizes all merchant and Shopify data into our data lake in highly optimized formats for fast query processing, and maintains the security and quality of our datasets.\n* The Analytics group builds products that leverage the Engine primitives to power scalable transformation of data at Shopify in batch, in streaming, or for machine learning. This group is focused on making it really simple for our users to answer three questions: What happened in the past? What is happening now? And, what will happen in the future? \n* The Data Experiences group builds end-user experiences for experimentation, data discovery, and business intelligence reporting.\n* The Reliability group operates the data platform efficiently in a consistent and reliable manner. They build tools that other Data Platform teams can leverage to encourage consistency, and they champion reliability across the platform.\n\n**Qualifications**\n\nWhile our teams value specialized skills, they've also got a lot in common. We're looking for: \n\n* A high-energy self-starter with experience and passion for data and big-data-scale processing. 
You enjoy working in fast-paced environments and love making an impact. \n* An exceptional communicator with the ability to translate technical concepts into easy-to-understand language for our stakeholders. \n* Excitement for working with a remote team; you value collaborating on problems, asking questions, delivering feedback, and supporting others in their goals whether they are in your vicinity or entire cities apart.\n* A solid software engineer: experienced in building and maintaining systems at scale.\n\n**A Senior Data Developer at Shopify typically has 4-6 years of experience in one or more of the following areas:**\n\n* Working with the internals of a distributed compute engine (Spark, Presto, dbt, or Flink/Beam)\n* Query optimization, resource allocation and management, and data lake performance (Presto, SQL) \n* Cloud infrastructure (Google Cloud, Kubernetes, Terraform)\n* Security products and methods (Apache Ranger, Apache Knox, OAuth, IAM, Kerberos)\n* Deploying and scaling ML solutions using open-source frameworks (MLflow, TFX, H2O, etc.)\n* Building full-stack applications (Ruby/Rails, React, TypeScript)\n* Background and practical experience in statistics and/or computational mathematics (Bayesian and Frequentist approaches, NumPy, PyMC3, etc.)\n* Modern big-data storage technologies (Iceberg, Hudi, Delta)\n\n**Additional information**\n\nAt Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous people, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities.\n\n \n\n#Location\nCanada, United States
This job post is closed and the position is probably filled. Please do not apply. Work for Surge and want to re-open this job? Use the edit link in the email when you posted the job!
\nPosition: Sr. Python Developer\n\n\nResponsibilities:\nThe Engineer will actively participate in Scrum development teams and meetings. Additionally, the Engineer will be responsible for working with a highly functional team developing and automating data ingest, optimizing system and search performance, integrating with enterprise authentication services, and establishing and improving system monitoring, all while maintaining established security protocols across development, test, and production systems.\n• Senior Python Developer with good experience in Python, Pandas/NumPy/SciPy, and RESTful APIs\n• Backend = Python\n• Frontend = Angular or React\n• Experience with Node.js would be helpful \n• Expertise in at least one popular Python framework (like Django, Flask, or Tornado); Spark/Kafka/Hadoop a plus\n• Full Stack Engineer capable of designing solutions, writing code, testing code, and automating test and deployment \n• Overall delivery of software components, working in collaboration with product and design teams \n• Collaborating with other technology teams to ensure integrated end-to-end design and integration \n• Enforcing existing process guidelines; driving new processes, guidelines, team rules, and best practices \n• Ready, willing, and able to pick up new technologies and pitch in on story tasks (design, code, test, CI/CD, deploy, etc.)\n• Ensuring efficient execution of overall product delivery by prioritizing, planning, and tracking sprint progress. 
(This can include the development of shippable code.)\n\nQualifications:\n• Expert in Python development\n• 10+ years of Python development experience \n• Bachelor’s/Master’s degree in Computer Science or a related quantitative field\n• Knowledgeable in cloud platforms (preferably AWS: both traditional EC2 and serverless Lambda)\n• Deep experience with microservices architecture, CI/CD solutions (including Docker), and DevOps principles\n• Understanding of the threading limitations of Python, and of multi-process architecture\n• Solid foundation and understanding of relational and NoSQL database principles\n• Experience working with numerical/quantitative systems, e.g., pandas, NumPy, SciPy, and Apache Spark \n• Experience in developing and using RESTful APIs \n• Expertise in at least one popular Python framework (like Django, Flask, or Tornado) \n• Experience in writing automated unit, integration, regression, performance, and acceptance tests \n• Solid understanding of software design principles\n• Proven track record of executing on the full product lifecycle (inception through deprecation) to create highly scalable and flexible RESTful APIs that enable any number of digital products\n• Self-directed, with a start-up/entrepreneur mindset\n• Ravenous about learning technology and problem-solving\n• Strong writing and communication skills \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to Python, Senior, Developer, Digital Nomad, DevOps, Serverless, Cloud, NoSQL, Angular, Engineer, Apache and Backend jobs that are similar:\n\n
$70,000 — $120,000/year\n
This job post is closed and the position is probably filled. Please do not apply. Work for Olo and want to re-open this job? Use the edit link in the email when you posted the job!
\nAt Olo we develop an online food ordering platform used by many of the country’s largest restaurant chains, reaching millions of consumers. Chances are if you’ve ordered directly from a restaurant brand’s app or website, we’ve made that happen. Mobile ordering and payments form an exciting and active industry full of interesting players, and yet still a relatively untapped market ripe for disruption. We’re quite up-front about the technical challenges our business faces. Running a platform with multiple white-labeled front-ends, that maintains real-time connections into thousands of restaurants’ POS systems, and coordinates complex transactions between these and other third parties (such as payment gateways and gift card providers), is not for the faint of heart!\n\nYou’ll be joining our Infrastructure Team, responsible for keeping Olo’s food ordering and delivery systems up and running reliably and securely.\n\nOur DevOps-oriented team uses Infrastructure-as-Code with automation wherever possible. The system has many moving parts and frequent deployments.\n\nAny engineer may work at Olo’s headquarters in New York City’s Financial District or remotely from anywhere in the U.S. In fact, more than half of our engineering team is remote!\n\nResponsibilities\n\n\n* Building virtual infrastructure in cloud platforms such as AWS using Infrastructure as Code and tools such as Terraform and Ansible\n\n* Creating and reviewing pull requests for configuration updates and server provisioning\n\n* Assisting in designing and maintaining build pipelines for Olo applications and infrastructure\n\n* Patching operating system and security-related vulnerabilities\n\n* Configuring Load Balancers, VPCs, and Security Groups\n\n* Proactively monitoring all systems for uptime and security, noticing trends before they become issues. 
Improving monitoring infrastructure as needed\n\n* Responding to PagerDuty alerts as part of an on-call rotation with another engineer\n\n* Administering Windows and Linux servers\n\n\n\n\nRequired Skills/Experience\n\n\n* At least 3 years of production-level experience supporting a 24/7 web application in a DevOps role\n\n* In-depth production-level experience with AWS services and APIs\n\n* Comfortable in your scripting tool of choice - PowerShell, Python, Bash, etc.\n\n* Working knowledge of DevOps, automation, and Infrastructure as Code\n\n* Experience designing and creating build pipelines (TeamCity, Jenkins, Octopus)\n\n* Solid networking knowledge (VPCs, Security Groups), and capable of troubleshooting all types of network traffic issues\n\n* Experience using server provisioning software such as Ansible, Chef or Puppet\n\n* Comfort working with Linux and Windows servers\n\n* A passion for uptime\n\n\n\n\nBeneficial\n\n\n* Expert-level Windows Server knowledge, including IIS, Active Directory, WSUS, SQL, Windows Server Failover Cluster and Windows Networking\n\n* In-depth knowledge of Apache Kafka\n\n* Experience using container and orchestration tools such as Docker and Kubernetes\n\n* PCI compliance experience\n\n* Started your career as a software developer\n\n\n\n\n\nAbout Olo\n\nOlo is the on-demand interface for the restaurant industry, powering digital ordering and delivery for over 250 restaurant brands. Olo’s enterprise-grade software powers every stage of the digital restaurant transaction, from fully-branded user interfaces to the back-of-house order management features that keep the kitchen running smoothly. Orders from Olo are injected seamlessly into existing restaurant systems to help brands capture demand from on-demand channels such as branded websites and apps, third-party marketplaces, social media channels, and personal assistant devices like the Amazon Echo. 
Olo is a pioneer in the industry, beginning with text message ordering on mobile feature phones in 2005. Today, millions of consumers use Olo to order ahead (SKIP THE LINE®) or get meals delivered from the restaurants they love. Customers include Applebee’s, Chili’s, Chipotle, Denny’s, Five Guys Burgers & Fries, Jamba Juice, Noodles & Company, Red Robin, Shake Shack, sweetgreen, Wingstop, and more.\n\nOlo’s office is located at One World Trade Center in Manhattan. We offer great benefits, such as 20 days of Paid Time Off, fully paid health, dental and vision care premiums, stock options, a generous parental leave plan, and perks like FitBits, rotating craft beers on tap in our kitchen, and food events featuring our clients' menu items (now you know why we give out FitBits!). Check out our culture map: https://www.olo.com/images/culture.jpg.\n\nWe encourage you to apply!\n\nAt Olo, we know a diverse and inclusive team not only makes our products better, but makes our workplace better too. Many groups are consistently underrepresented across the tech sector, and we are fully committed to doing our part to move the needle.\n\nOlo is an equal opportunity employer and diversity is highly valued at our company. All applicants receive consideration for employment. We do not discriminate on the basis of race, religion, color, national origin, gender identity, sexual orientation, pregnancy, age, marital status, veteran status, or disability status.\n\nIf you like what you read, hear, and/or know about Olo, and want to be a part of our team, please do not hesitate to apply! We are excited to hear from you! \n\n#Salary and compensation\n
No salary data published by company so we estimated salary based on similar jobs related to DevOps, Senior, Engineer, Amazon, Cloud, Mobile, Apache and Linux jobs that are similar:\n\n
$70,000 — $120,000/year\n
This job post is closed and the position is probably filled. Please do not apply. Work for Elastic and want to re-open this job? Use the edit link in the email when you posted the job!
\nAt Elastic, we have a simple goal: to solve the world's data problems with products that delight and inspire. As the company behind the popular open source projects — Elasticsearch, Kibana, Logstash, and Beats — we help people around the world do great things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. The Elastic family unites employees across 30+ countries into one coherent team, while the broader community spans over 100 countries.\n\nWe are looking for senior security engineers to be part of a team focused on implementing, improving, and maintaining security for Elastic Cloud while enabling our team to grow and succeed. While not required, US-based candidates are preferred.\n\nWhat You Will Be Doing:\n\n\n* Designing, engineering, and implementing security solutions for highly complex public and private cloud environments.\n\n* Collaborating with talented engineers and developers to deploy into new environments with security in mind.\n\n* Working within a fast-paced and open environment with tools like Git, Terraform, Ansible, and Rundeck.\n\n* Automation, automation, automation: if you do it more than twice, automate it!\n\n\n\n\nWhat You Bring Along:\n\n\n* Senior-level experience in public cloud provider environments such as AWS, GCP, Azure, SoftLayer, or OpenStack, along with hands-on experience with Terraform or CloudFormation\n\n* A deep understanding of Linux systems hardening, containerization, and cloud security controls\n\n* The ability to drive decisions and be hands-on\n\n* Excellent verbal and written interpersonal skills; a phenomenal teammate with strong analytical, problem solving, debugging and troubleshooting skills\n\n\n\n\nBonus Points:\n\n\n* You've built and/or maintained tools to get the job done and aren't afraid to use open source solutions\n\n* You love to learn new things 
and strive to continuously learn and challenge yourself and others\n\n* Knowledge of or experience with networking concepts in a large cloud-based environment\n\n* Experience with or exposure to compliance frameworks (FedRAMP, SOC 2, PCI, ISO 27001, GDPR)\n\n* Exposure to other areas such as application security, Governance, Risk & Controls, and development\n\n\n\n\nAdditional Information:\n\nWe're looking to hire team members invested in realizing the goal of making real-time data exploration easy and available to anyone. As a distributed company, we believe that diversity drives our vibe! We'd love to expand the team outside of North America. Whether you're ready to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life.\n\n\n* Competitive pay based on the work you do here and not your previous salary\n\n* Equity\n\n* Global minimum of 16 weeks of fully paid parental leave (moms & dads)\n\n* Generous vacation time and one week of volunteer time off\n\n* Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.\n\n\n\n\nElastic is an Equal Employment Opportunity employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law. \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar InfoSec, Engineer, Cloud, Senior, Apache, and Linux jobs:\n\n
$75,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
\nThe Opportunity\n\nYou will lead our infrastructure team, which runs large-scale threat detection, monitoring, and training products, as well as systems used by the PhishLabs Security Operations Center and RAID teams as they mitigate existing threats. Using your technical skills as a Cloud Systems Engineer and a can-do attitude, you will work as a member of an agile Kanban team to deliver highly secure, fault-tolerant, scalable cloud-based network, compute, database, and storage solutions while fighting back against cybercrime.\n\nHow you will impact PhishLabs and our clients:\n\n\n* Design highly available, scalable, and maintainable infrastructure that supports business objectives\n\n* Implement software and systems for the diverse and constantly evolving landscape of detecting, analyzing, and monitoring cyber security threats\n\n* Work on problems of diverse scope, analyzing sometimes complex contributing factors to quickly diagnose and resolve root causes\n\n* Periodically provide on-call technical support for unexpected issues affecting customers\n\n* Share your area of expertise with internal colleagues of varying technical skill levels\n\n\n\n\nWhat you NEED to succeed\n\n\n* 5+ years of cloud infrastructure administration/support experience with a progression of increasing scope and complexity of work\n\n* Bachelor’s degree in Computer Science or a related field preferred\n\n* Skilled with virtualized environments, containerization, and auto-scaling (e.g. Vagrant, Docker, Kubernetes, CloudFormation)\n\n* Advanced knowledge of configuration management tooling (e.g. Chef, Puppet, Ansible, Salt)\n\n* Thorough understanding of TCP/IP networking and application protocols (e.g. 
DNS, FTP, HTTP, LDAP, SMTP, etc.)\n\n* Experience managing SQL and NoSQL databases including MySQL, Amazon Aurora, Postgres, and MongoDB\n\n* Experience with infrastructure/application monitoring and alerting utilizing tools like Nagios, CloudWatch, and Sentry\n\n* Proven experience effectively managing shared secrets, credentials, and sensitive data (PII, PCI, HIPAA, FERPA, passwords, certificates)\n\n* Experience with access control management using 2FA, SSO, LDAP, OAuth2, and SAML\n\n* Proficiency writing Bash, Python, or Ruby scripts and managing code using Git/Bitbucket\n\n* Mastery of Linux with a Red Hat distribution (Red Hat, Amazon Linux, CentOS, Fedora)\n\n* Experience managing application servers such as Apache, Tomcat, Nginx, Elasticsearch/Kibana, and Apache Solr\n\n* Experience with email administration using Postfix, Sendmail, and Amazon SES\n\n\n\n\nWhat helps you stand out:\n\n\n* Experience with DevOps CI/CD delivery pipelines utilizing Jenkins\n\n* Certification as an AWS Solutions Architect\n\n* Red Hat certification\n\n* An information security certification (SANS GIAC, ISC2, OSCP, CEH, CompTIA)\n\n* An automation certification (e.g. Puppet, Chef, Ansible, Salt)\n\n* Experience with Office 365 administration\n\n\n \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar Senior, Engineer, DevOps, Amazon, Cloud, NoSQL, Ruby, Apache, and Linux jobs:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
\nWe like to build stuff… sometimes complicated stuff… for Non-Profits, Charities and Art Galleries. We’re on the lookout for a confident and ambitious DevOps / Sysadmin to join our fast-growing startup to help us automate our development processes and build our first SaaS product!\n\nYou'll be:\n\n\n* Developing our Ansible suite to automate more tasks such as security and deployment releases, site updates and server maintenance\n\n* Maintaining and developing our Docker containers for developer environments and testing environments\n\n* Architecting a robust SaaS platform for a really significant Drupal project for the not-for-profit sector\n\n* Integrating with Slack/Hipchat/Stride (or similar) and developing our in-house chatbot (Compubot!)\n\n* Working closely with the development team to develop command-line tools to automate their processes\n\n\n\n\nIf you live and breathe Linux and love reconfiguring Web servers for the LEMP/LAMP stack until they are rock-solid and secure, but would also enjoy the challenge of developing a new SaaS platform on AWS, then we would love to hear from you.\n\nSkills and requirements\n\nWe are looking for someone who has:\n\n\n* Excellent Linux administration skills.\n\n* PHP Web server experience (NGINX preferable, Apache useful).\n\n* Some DB admin experience (MySQL or equivalent).\n\n* Experience with cloud technologies (AWS/OpenStack).\n\n* Previous experience with CI/DevOps tool platforms (e.g. Jenkins, Chef, Puppet, Ansible).\n\n* Good Git skills (comfortable preparing release branches).\n\n* Previous PHP deployment automation experience desirable.\n\n* Some experience/knowledge of monitoring tools.\n\n* Strong collaboration, written and verbal skills.\n\n* Love creating awesome documentation!\n\n\n\n\nWhilst we'd love to find someone who has all the skills we need, what's most important to us is finding a member of the team who can grasp concepts quickly, has great attention to detail, and most importantly loves a tech challenge. 
In short, we’re a team of self-confessed geeks who love tinkering and tweaking until something is just right, and we are looking for someone who thinks and acts like us.\n\nLocation\n\nAt Compucorp we are a distributed team and we welcome people to join us from all around the world; however, as a UK-based company, some overlap with UK hours will be required. By default, you should assume that your working hours would be within ±3 hours of a 9:00 AM UK start time. There is some flexibility in this, but please be aware that staff members starting outside of these hours are rare, so if these hours would be an issue for you, we would not suggest that you apply.\n\nFor the DevOps role, due to EU data protection requirements, the team member must be from either an EEA country or a country that the EU considers to have an adequate level of data protection. The list can be found at the link below:\n\nhttps://ico.org.uk/for-organisations/guide-to-data-protection/principle-8-international#eea-countries \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar DevOps, Admin, Senior, Engineer, Sys Admin, Cloud, Chatbot, PHP, Git, Drupal, SaaS, Apache, and Linux jobs:\n\n
$70,000 — $120,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.