Remote Senior Site Reliability Engineer, ML Platforms
Are you passionate about building and maintaining large-scale production systems that support advanced data science and machine learning applications? Do you want to join a team at the heart of NVIDIA's data-driven decision-making culture? If so, we have a great opportunity for you!

NVIDIA is seeking a Senior Site Reliability Engineer (SRE) for the Data Science & ML Platforms team. The role involves designing, building, and maintaining services that enable real-time data analytics, streaming, data lakes, observability, and ML/AI training and inferencing. Responsibilities include implementing software and systems engineering practices to ensure high efficiency and availability of the platform, applying SRE principles to improve production systems and optimize service SLOs, and collaborating with our customers to plan and implement changes to the existing system while monitoring capacity, latency, and performance.

To succeed in this position, you need a strong background in SRE practices, systems, networking, coding, capacity management, cloud operations, continuous delivery and deployment, and open-source cloud-enabling technologies like Kubernetes and OpenStack. A deep understanding of the challenges and standard methodologies of running large-scale distributed systems in production is also necessary: solving complex issues, automating repetitive tasks, and proactively identifying potential outages. Excellent communication and collaboration skills, and a culture of diversity, intellectual curiosity, problem solving, and openness, are essential.

As a Senior SRE at NVIDIA, you will have the opportunity to work on innovative technologies that power the future of AI and data science, and be part of a dynamic and supportive team that values learning and growth.
The role provides the autonomy to work on meaningful projects with the support and mentorship needed to succeed, and contributes to a culture of blameless postmortems, iterative improvement, and risk-taking. If you are seeking an exciting and rewarding career that makes a difference, we invite you to apply now!

What you'll be doing:

* Develop software solutions to ensure the reliability and operability of large-scale systems supporting mission-critical use cases.
* Gain a deep understanding of our system operations, scalability, interactions, and failures to identify improvement opportunities and risks.
* Create tools and automation to reduce operational overhead and eliminate manual tasks.
* Establish frameworks, processes, and standard methodologies to enhance operational maturity and team efficiency, and to accelerate innovation.
* Define meaningful and actionable reliability metrics to track and improve system and service reliability.
* Oversee capacity and performance management to facilitate infrastructure scaling across public and private clouds globally.
* Build tools to improve our service observability for faster issue resolution.
* Practice sustainable incident response and blameless postmortems.

What we need to see:

* Minimum of 10 years of experience in SRE, cloud platforms, or DevOps with large-scale microservices in production environments.
* Master's or Bachelor's degree in Computer Science, Electrical Engineering, or Computer Engineering, or equivalent experience.
* Strong understanding of SRE principles, including error budgets, SLOs, and SLAs.
* Proficiency in incident, change, and problem management processes.
* Skilled in problem-solving, root cause analysis, and optimization.
* Experience with streaming data infrastructure services, such as Kafka and Spark.
* Expertise in building and operating large-scale observability platforms for monitoring and logging (e.g., ELK, Prometheus).
* Proficiency in programming languages such as Python, Go, Perl, or Ruby.
* Hands-on experience with scaling distributed systems in public, private, or hybrid cloud environments.
* Experience in deploying, supporting, and supervising services, platforms, and application stacks.

Ways to stand out from the crowd:

* Experience operating large-scale distributed systems with strong SLAs.
* Excellent coding skills in Python and Go, and extensive experience operating data platforms.
* Knowledge of CI/CD systems, such as Jenkins and GitHub Actions.
* Familiarity with Infrastructure as Code (IaC) methodologies and tools.
* Excellent interpersonal skills for identifying and communicating data-driven insights.

NVIDIA leads the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science-fiction inventions, from artificial intelligence to autonomous cars. NVIDIA is looking for exceptional people like you to help us accelerate the next wave of artificial intelligence.

The base salary range is 224,000 USD – 425,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law. NVIDIA is the world leader in accelerated computing.
NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society. Learn more about NVIDIA.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, DevOps, Cloud, Senior, and Engineer jobs:
$60,000 – $135,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Santa Clara, CA, US
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should never have to pay to apply, never have to pay for equipment that is reimbursed later, and never have to pay for required training. Those are scams: never pay for anything. Posts that link to "how to work online" pages are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter: check that the domain name of the site or email matches the company's main domain. Scams in remote work are rampant, so be careful! Read more to avoid scams. When clicking the apply button above, you will leave Remote OK and go to the company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information there (external sites) or here.
LivePerson (NASDAQ: LPSN) is the global leader in enterprise conversations. Hundreds of the world's leading brands, including HSBC, Chipotle, and Virgin Media, use our award-winning Conversational Cloud platform to connect with millions of consumers. We power nearly a billion conversational interactions every month, providing a uniquely rich data set and safety tools to unlock the power of Conversational AI for better customer experiences.

At LivePerson, we foster an inclusive workplace culture that encourages meaningful connection, collaboration, and innovation. Everyone is invited to ask questions, actively seek new ways to achieve success, and reach their full potential. We are continually looking for ways to improve our products and make things better. This means spotting opportunities, resolving ambiguities, and finding effective solutions to the problems our customers care about.

We are a remote organization, but we value the opportunity to bring our teams together for offsite events, team collaboration, and training sessions. We are therefore looking for candidates who are based in Poland and are willing to come into the office for these activities. If this aligns with your situation and preferences, we encourage you to apply for this role.

Overview:

Our global DevOps team is growing rapidly and requires a manager to lead and collaborate across the US, EMEA, and APAC regions to support our datacenter and cloud environments. This team focuses on the stability and reliability of our global infrastructure by leveraging existing standards, processes, and automation solutions.
The DevOps Engineering Manager will oversee a team of engineers, serve as a domain expert in networking technologies, and support both datacenter and cloud infrastructure.

You will:

* Lead and manage projects, changes, and incident resolutions.
* Make strong decisions in high-pressure, fast-paced environments.
* Develop, maintain, and support data center and cloud technologies across our global infrastructure, including routing, switching, and cloud technologies.
* Oversee the implementation of changes, upgrades, and preventative maintenance.
* Troubleshoot hardware, software, and vendor incidents to resolution, and identify remediation improvements.
* Develop and leverage automation and monitoring tools to maximize visibility and recovery.
* Communicate and collaborate across teams and levels to manage projects and incidents.
* Mentor and guide team members to enhance their technical skills and professional growth.

You have:

* Strong working knowledge of configuring and troubleshooting routing protocols (BGP, OSPF, and static).
* Extensive experience with data center and cloud-based networking technologies and infrastructure (LAN, WAN, firewall, SD-WAN, BGP, DNS, load balancing, VPN, etc.).
* Experience with Arista and Cisco configurations and maintenance.
* Deep understanding of network protocols and services.
* Extensive experience in Linux environments and enterprise distros.
* Experience with software development and strong scripting skills.
* Experience with Palo Alto firewall configurations and maintenance.
* Experience with F5 LTM and AFM configurations and maintenance.
* Experience with networking and securing Kubernetes with Calico.
* Experience with cloud technologies and IaC deployments.
* Experience with GCP, AWS, and Azure cloud environments (certifications preferred).
* Experience with virtual and containerized deployments in both data center and cloud.
* Experience with Kubernetes and GKE deployments and networking elements (CNI, Istio, Calico).
* Experience with CI/CD pipeline components, support, functionality, and tools.
* Experience with version control concepts and operations (Git).
* Experience with data formats (XML, JSON, YAML) and parsing them into Python data structures.
* Experience working within an Agile development environment.
* Experience with webhooks, API styles, HTTP response codes, and authentication mechanisms.
* Experience with Ansible deployments and creating Ansible playbooks.
* Experience with Jenkins and parameterization.
* Use of automation tools and modules (Rundeck/Puppet/Terraform).
* Experience with the Network Automation and Programmability Abstraction Layer with Multivendor (NAPALM) framework.
* Ability to leverage model-driven programmability within an Arista networking environment.
* Experience with cloud infrastructure such as compute, network, storage, and backup.
* Understanding of the need to organize code into methods, functions, classes, and modules.
* Experience with monitoring performance metrics and KPIs.

Additional requirements:

* Collect feedback and requirements from design and technical staff.
* Create diagrams, business cases, and architectural design documents.
* Support on-call and weekend rotation as needed.
* Collaborate with cross-functional teams.
* Able to handle stressful situations with a level-headed approach.
* Excellent verbal and written skills (English).

NOTE: This position is in Poland. Please apply only if you are in Poland.

Benefits:

* Health: medical, dental, and vision
* Development: Native AI learning

#LI-Remote

Why you'll love working here:

As leaders in enterprise customer conversations, we celebrate diversity, empowering our team to forge impactful conversations globally. LivePerson is a place where uniqueness is embraced, growth is constant, and everyone is empowered to create their own success.
We're also very proud to have earned recognition from Fast Company, Newsweek, and BuiltIn as a top innovative, beloved, and remote-friendly workplace.

Belonging at LivePerson:

We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations, and ordinances. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law.

We are committed to the accessibility needs of applicants and employees. We provide reasonable accommodations to job applicants with physical or mental disabilities. Applicants with a disability who require reasonable accommodation for any part of the application or hiring process should inform their recruiting contact upon initial connection.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Python, DevOps, Cloud, API, and Engineer jobs:
$57,500 – $92,500/year
#Location
Katowice, Silesian Voivodeship, Poland
We are currently seeking a Senior Data Engineer with 5–7 years' experience. The ideal candidate can work independently within an Agile environment and has experience with cloud infrastructure, leveraging tools such as Apache Airflow, Databricks, dbt, and Snowflake. Familiarity with real-time data processing and AI implementation is advantageous.

Responsibilities:

* Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch and real-time use cases.
* Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks.
* Leverage Databricks for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.
* Utilize PySpark and/or Ray to build and scale distributed computing tasks, enhancing the performance of machine learning model training and inference.
* Monitor, troubleshoot, and resolve issues within data pipelines and infrastructure, implementing best practices for data engineering and continuous improvement.
* Document data engineering workflows diagrammatically.
* Collaborate with other Data Engineers, Product Owners, Software Developers, and Machine Learning Engineers to implement new product features by understanding their needs and delivering in a timely manner.

Qualifications:

* Minimum of 3 years' experience deploying enterprise-level, scalable data engineering solutions.
* Strong examples of independently developed data pipelines end-to-end: from problem formulation and raw data to implementation, optimization, and results.
* Proven track record of building and managing scalable cloud-based infrastructure on AWS (incl. S3, DynamoDB, EMR).
* Proven track record of implementing and managing the AI model lifecycle in a production environment.
* Experience using Apache Airflow (or equivalent), Snowflake, and Lucene-based search engines.
* Experience with Databricks (Delta format, Unity Catalog).
* Advanced SQL and Python knowledge with associated coding experience.
* Strong experience with DevOps practices for continuous integration and continuous delivery (CI/CD).
* Experience wrangling structured and unstructured file formats (Parquet, CSV, JSON).
* Understanding and implementation of best practices within ETL and ELT processes.
* Data quality best-practice implementation using Great Expectations.
* Real-time data processing experience using Apache Kafka (or equivalent) is advantageous.
* Works independently with minimal supervision.
* Takes initiative and is action-focused.
* Mentors and shares knowledge with junior team members.
* Collaborative, with a strong ability to work in cross-functional teams.
* Excellent communication skills, with the ability to communicate with stakeholders across varying interest groups.
* Fluency in spoken and written English.

#LI-RT9

Edelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics, and data consultancy with a distinctly human mission.

We use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting, and connections more meaningful.

DXI brings together and integrates the necessary people-based PR, communications, social, research, and exogenous data, as well as the technology infrastructure to create, collect, store, and manage first-party data and identity resolution.
DXI comprises over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.

To learn more, visit: https://www.edelmandxi.com

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, DevOps, Cloud, Senior, Junior, and Engineer jobs:
$60,000 – $110,000/year
The data science community is diverse in skill sets, objectives, and preferences for tools and workflows. Through our innovative visualization tools and software, we enable the creation, analysis, and sharing of data-driven insights across the globe. As a member of Plotly's Engineering team, you will be part of a group that is passionate about solving complex problems and enabling a seamless user experience. Our team thrives on autonomy, collaboration, continuous learning, and pushing the boundaries of what's possible in the data-viz space. You will have the opportunity to work on cutting-edge technologies and contribute to innovative solutions that empower our customers to make data-driven decisions. You'll be working with a diverse team of experts across the organization who are committed to excellence and thrive in a fast-paced, dynamic environment.

We are seeking a highly skilled and experienced Senior Software Developer in Test (SDET) to join our dynamic team. As a Senior SDET, you will play a critical role in ensuring the quality and reliability of our products through the design, development, and execution of comprehensive automated testing strategies.
You will collaborate closely with cross-functional teams, including developers, product managers, and quality assurance engineers, to identify areas where we can increase our automated test coverage and enhance the overall software development process.

The technologies you would be working with include:

* Cypress
* JavaScript/TypeScript
* React
* Python
* NestJS
* Kubernetes for infrastructure orchestration
* Cloud providers: AWS, Azure, GCP (consumer and enterprise-level solutions)

Responsibilities:

* Design, develop, and maintain automated test scripts and test suites for functional, performance, and regression testing of APIs, platform, and UI software components.
* Create detailed, comprehensive, and well-structured test plans and test cases.
* Test existing products to identify, isolate, and track defects.
* Perform manual tests when necessary to maintain a balanced approach alongside automated testing.
* Ensure products meet business and technical requirements and customer expectations, as well as performance and reliability standards.
* Contribute to the strategic planning of Plotly's overall product testing strategy.
* Serve as a knowledgeable resource for testing automation, providing training and technical guidance to team members as needed.
* Actively participate in code reviews, design discussions, and project planning meetings.
* Use your creativity, curiosity, and resourcefulness to increase quality at Plotly.
* Block software releases if they don't meet your standards (don't worry, we'll have your back!).
* Help cultivate an environment of exceptional software quality.
* Educate and help others understand why they'll soon love product quality as much as you do.
* Train, mentor, and educate fellow team members.

Job requirements:

* Bachelor's degree in computer science or a related field.
* 5+ years of related professional experience as a software developer or software developer in test.
* Proficient in writing test cases, developing automated scripts, utilizing automation tool frameworks, and maintaining test data sets.
* Experience with load and performance testing, including design, development, implementation, and reporting.
* Familiarity with working in a containerized (Docker, Kubernetes) environment.
* Experience with test automation frameworks and scripting/programming languages such as Cypress, JavaScript, and TypeScript.
* Experience in testing APIs / RESTful services.
* Excellent organizational skills to handle multiple tasks within project timelines.
* Effective communication skills for collaborating with cross-functional teams.
* A strong passion for continuous learning and staying current with emerging technologies, industry trends, and best practices in software testing and quality assurance.

Bonus points:

* Experience with the Python programming language.
* Exposure to data science and machine learning concepts.
* Familiarity with Continuous Integration (CI) environments, particularly GitHub Actions.
* Knowledge of GitHub, CI, and DevOps practices.

Don't meet all the requirements, but feel you would be a great fit for our plot-legion?
Don't hesitate to apply!

What you can expect from us:

Health & Wellbeing

* Comprehensive health coverage
* Generous PTO
* Parental leave top-up program

Growth & Future

* Stock options for all full-time employees
* Learning & development program
* Work alongside a dedicated team

Flexibility & Community

* Remote-first work
* Home office support
* Employee-led DE&I resource group
* Plotly Community Forum

Why Plotly?

Unleash your creativity and shape the future of data analytics!

Founded by innovators and driven by our community of users and customers, we eagerly tackle every challenge, from crafting state-of-the-art UI for seamless data interaction to optimizing our graphing libraries and services for highly reliable performance. Our journey has only begun!

We are a tight-knit and quickly growing team where each member can make an immediate, meaningful impact. We take on complex problems, work hard, and are firm believers in the open-source mission. At Plotly, you'll work alongside a diverse team of first-class engineers, developers, scientists, and builders who challenge the status quo and set a high bar. We encourage each member of our team to continually explore and expand their skill sets, and to approach every problem with curiosity and an open mind. Together, we make it possible for people everywhere to share data and insights that make real impacts in business and around the world.

Plotly is an equal-opportunity employer and does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. If you require any accommodations, please let us know during the application process.
#Salary and compensation
No salary data was published by the company, so we estimated a range based on similar jobs related to Design, Python, Testing, DevOps, JavaScript, Cloud, API, and Senior roles:

$60,000 — $110,000/year
#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
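The QA role above centers on automated testing of APIs and UI components. As a minimal illustration of the kind of test case the post describes (the endpoint shape and helper names here are hypothetical, not Plotly's actual code), a stdlib-only `unittest` sketch for validating an API response payload:

```python
import json
import unittest

def parse_health_response(body: str) -> dict:
    """Parse a hypothetical /health JSON payload and normalize the status field."""
    data = json.loads(body)
    return {
        "status": data.get("status", "unknown").lower(),
        "version": data.get("version"),
    }

class HealthEndpointTest(unittest.TestCase):
    """Example test cases; run with `python -m unittest <module>`."""

    def test_reports_ok_status(self):
        body = '{"status": "OK", "version": "1.4.2"}'
        self.assertEqual(parse_health_response(body)["status"], "ok")

    def test_missing_status_defaults_to_unknown(self):
        self.assertEqual(parse_health_response("{}")["status"], "unknown")
```

In a real suite the raw body would come from an HTTP client or a Cypress intercept rather than a string literal.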
Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

* Architect and develop data pipelines to optimize performance, quality, and scalability
* Build, maintain, and operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
* Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake
* Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
* Orchestrate sophisticated data flow patterns across a variety of disparate tooling
* Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
* Partner with the rest of the Data Platform team to set best practices and ensure their execution
* Partner with the analytics engineers to ensure the performance and reliability of our data sources
* Partner with machine learning engineers to deploy predictive models
* Partner with the legal and security teams to build frameworks and implement data compliance and security policies
* Partner with DevOps to build IaC and CI/CD pipelines
* Support code versioning and code deployments for data pipelines

You Have:

* 8+ years of professional experience designing, creating, and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
* Demonstrated experience writing clean, efficient, and well-documented Python code, and a willingness to become effective in other languages as needed
* Demonstrated experience writing complex, highly optimized SQL queries across large data sets
* Experience with cloud technologies such as AWS and/or Google Cloud Platform
* Experience with the Databricks platform
* Experience with IaC technologies like Terraform
* Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
* Experience building event streaming pipelines using Kafka/Confluent Kafka
* Experience with the modern data stack: Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
* Experience with containers and container orchestration tools such as Docker or Kubernetes
* Experience with machine learning and MLOps
* Experience with CI/CD (Jenkins, GitHub Actions, CircleCI)
* Thorough understanding of the SDLC and Agile frameworks
* Project management skills and a demonstrated ability to work autonomously

Nice to Have:

* Experience building data models using dbt
* Experience with JavaScript and event tracking tools like GTM
* Experience designing and developing systems with desired SLAs and data quality metrics
* Experience with microservice architecture
* Experience architecting an enterprise-grade data platform

#Salary and compensation
No salary data was published by the company, so we estimated a range based on similar jobs related to Python, Docker, Testing, DevOps, JavaScript, Cloud, API, Senior, Legal, and Engineer roles:

$60,000 — $110,000/year
#Location
San Francisco, California, United States
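The Hims posting above calls for testing frameworks that improve data quality and pipeline reliability. A minimal stdlib sketch of a row-level quality check (the record fields and check names are illustrative, not Hims' actual schema):

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class CheckResult:
    """Outcome of one data-quality check over a batch of rows."""
    name: str
    passed: int
    failed: int

def run_check(name: str, rows: Iterable[dict],
              predicate: Callable[[dict], bool]) -> CheckResult:
    """Count rows passing/failing a single quality predicate."""
    passed = failed = 0
    for row in rows:
        if predicate(row):
            passed += 1
        else:
            failed += 1
    return CheckResult(name, passed, failed)

# Hypothetical batch: one row is missing an email.
rows = [
    {"user_id": 1, "email": "a@example.com"},
    {"user_id": 2, "email": None},
]
result = run_check("email_not_null", rows, lambda r: r["email"] is not None)
```

In a real pipeline the same predicate pattern would feed a monitoring system (failure counts per batch) rather than a local variable.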
Position Description:

* Drive automation initiatives to streamline the deployment, scaling, and management of cloud resources
* Collaborate with the engineering team to enhance and maintain IaC best practices and facilitate CI/CD processes within the SDLC
* Work closely with developers to troubleshoot issues and provide guidance on infrastructure-related challenges; collaborate with cross-functional engineering teams to align infrastructure solutions with business objectives
* Implement and enhance monitoring solutions for our cloud environments, including creating alerts, dashboards, and visualizations that allow the identification and resolution of performance and availability issues
* Design, implement, and maintain LetsGetChecked cloud environments
* Manage vulnerabilities in containers and EC2 instances to ensure compliance
* Apply configuration changes through established change control management processes
* Create and maintain comprehensive documentation, including text and diagrams, for the entire platform
* Propose and implement improvements to enhance the efficiency and reliability of the cloud infrastructure by leveraging SRE best practices, including observability, eliminating toil, SLOs, SLIs, and SLAs
* Manage, maintain, and monitor multiple Kubernetes clusters
* Write and maintain Helm charts for our internal services
* Facilitate the transition from an EC2-based architecture to a more modern Kubernetes architecture

Requirements:

* BS in Computer Science or a related technical discipline (or equivalent experience)
* Proficiency in systems scripting languages like PowerShell and Bash; experience with a programming language like Python is a plus
* Experience managing cloud platforms, with a focus on AWS and some familiarity with Azure
* Windows Server and Linux (AWS Linux and Ubuntu) administration experience (patching, securing, and maintaining VMs and containers)
* Deep understanding of network protocols at different levels (IP, HTTP, DNS, etc.)
* Ability to quickly learn new or unfamiliar technology and products using documentation and internet resources
* Familiarity with our technology stack is an advantage (AWS, .NET, Packer, Terraform, AppVeyor, Vault, Consul, Helm, Salt)
* Conscious of security best practices

The base salary range for this role is €60,000 - €73,000.

Benefits:

Alongside base salary we offer a range of benefits including:

* Health insurance
* Annual compensation reviews
* After 90 days you will be eligible for Flexible PTO. At LetsGetChecked we have a Flexible PTO policy where you are not restricted to a specific number of holiday days/annual leave
* A free monthly LetsGetChecked test, as we are focused on the well-being of our teams as well as our patients
* A referral bonus program to reward you for helping us hire the best talent
* Internal opportunities and Careers Clinics to help you progress your career

#Salary and compensation
No salary data was published by the company, so we estimated a range based on similar jobs related to Python, DevOps, Cloud, Senior, Engineer, and Linux roles:

$60,000 — $110,000/year
#Location
Lisbon, Lisbon, Portugal
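The LetsGetChecked role above leans on SRE practices such as SLOs and error budgets. The underlying arithmetic is simple: an availability SLO implies a bounded amount of allowed downtime per period, and incidents spend that budget. A small sketch:

```python
def error_budget(slo: float, period_minutes: int) -> float:
    """Allowed downtime in minutes for an availability SLO over a period."""
    return period_minutes * (1.0 - slo)

def budget_remaining(slo: float, period_minutes: int,
                     downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget(slo, period_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over a 30-day month allows ~43.2 minutes of downtime.
month_minutes = 30 * 24 * 60
```

Teams typically gate risky changes on `budget_remaining` staying positive, which is what makes the SLO actionable rather than aspirational.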
Restaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365's culture is focused on empowering team members to produce top-notch results while elevating their skills. We're constantly evolving and improving to make sure we are and always will be "Best in Class" ... and we want that for you too!

Restaurant365 is looking for an experienced Data Engineer to join our data warehouse team that enables the flow of information and analytics across the company. The Data Engineer will participate in the engineering of our enterprise data lake, data warehouse, and analytic solutions. This is a key role on a highly visible team that will partner across the organization with business and technical stakeholders to create the objects and data pipelines used for insights, analysis, executive reporting, and machine learning. You will have the exciting opportunity to shape and grow with a high-performing team and the modern data foundation that enables the data-driven culture fueling the company's growth.

How you'll add value:
* Participate in the overall architecture, engineering, and operations of modern data warehouse and analytics platforms.
* Design and develop the objects in the data lake and EDW that serve as core building blocks for the semantic layer and datasets used for reporting and analytics across the enterprise.
* Develop data pipelines, transformations (ETL/ELT), orchestration, and job controls using repeatable software development processes, quality assurance, release management, and monitoring capabilities.
* Partner with internal business and technology stakeholders to understand their needs, then design, build, and monitor pipelines that meet the company's growing business needs.
* Look for continuous-improvement opportunities that automate workflows, reduce manual processes, reduce operational costs, uphold SLAs, and ensure scalability.
* Use an automated observability framework to ensure the reliability of data quality, data integrity, and master data management.
* Partner closely with peers in Product, Engineering, Enterprise Technology, and InfoSec teams on the shared enterprise needs of a data lake, data warehouse, semantic layer, transformation tools, BI tools, and machine learning.
* Partner closely with peers in Business Intelligence, Data Science, and SMEs in partnering business units to translate analytics and business requirements into SQL and data structures.
* Ensure platforms, products, and services are delivered with operational excellence and rigorous adherence to ITSM processes and InfoSec policies.
* Adopt and follow sound Agile practices for the delivery of data engineering and analytics solutions.
* Create documentation for reference, process, data products, and data infrastructure.
* Embrace ambiguity and other duties as assigned.

What you'll need to be successful in this role:
* 3-5 years of engineering experience in enterprise data warehousing, data engineering, business intelligence, and delivering analytics solutions
* 1-2 years of SaaS industry experience required
* Deep understanding of current technologies and design patterns for data warehousing, data pipelines, data modeling, analytics, visualization, and machine learning (e.g., the Kimball methodology)
* Solid understanding of modern distributed data architectures, data pipelines, and API pub/sub services
* Experience engineering for SLA-driven data operations with responsibility for uptime, delivery, consistency, scalability, and continuous improvement of data infrastructure
* Ability to understand and translate business requirements into data/analytic solutions
* Extensive experience with Agile development methodologies
* Prior experience with at least one of: Snowflake, BigQuery, Synapse, Databricks, or Redshift
* Highly proficient in both SQL and Python for data manipulation and assembly of Airflow DAGs
* Experience with cloud administration and DevOps best practices on AWS and GCP and/or general cloud architecture best practices, with accountability for cloud cost management
* Strong interpersonal, leadership, and communication skills, with the ability to relate technical solutions to business terminology and goals
* Ability to work independently in a remote culture and across many time zones and outsourced partners, likely CT or ET

R365 Team Member Benefits & Compensation
* This position has a salary range of $94K-$130K. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices.
* Comprehensive medical benefits, 100% paid for employee
* 401k + matching
* Equity option grant
* Unlimited PTO + company holidays
* Wellness initiatives

#BI-Remote

$90,000 - $130,000 a year

R365 is an Equal Opportunity Employer and we encourage all forward-thinkers who embrace change and possess a positive attitude to apply.

#Salary and compensation
No salary data was published by the company, so we estimated a range based on similar jobs related to Design, SaaS, InfoSec, Python, Accounting, DevOps, Cloud, API, and Engineer roles:

$60,000 — $110,000/year
#Location
Remote
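The Restaurant365 role above references Kimball-style dimensional modeling. At its core, that pattern maps a natural business key to a stable surrogate key that fact rows join against. A toy in-memory sketch (the table and field names are illustrative, not R365's actual schema):

```python
class Dimension:
    """Toy Kimball-style dimension: natural key -> surrogate key + attributes."""

    def __init__(self):
        self._rows = {}      # natural_key -> (surrogate_key, attrs)
        self._next_key = 1

    def surrogate_key(self, natural_key, attrs=None):
        """Return the surrogate key, inserting a new row on first sight."""
        if natural_key not in self._rows:
            self._rows[natural_key] = (self._next_key, attrs or {})
            self._next_key += 1
        return self._rows[natural_key][0]

# Hypothetical fact load: repeated stores reuse the same surrogate key.
store_dim = Dimension()
raw_sales = [
    {"store": "NYC-01", "sales": 1200},
    {"store": "LA-07", "sales": 800},
    {"store": "NYC-01", "sales": 450},
]
fact_rows = [
    {"store_key": store_dim.surrogate_key(r["store"]), "sales": r["sales"]}
    for r in raw_sales
]
```

A real EDW would do the same lookup via a merge/upsert in SQL (or dbt), with the surrogate key generated by the warehouse.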
Hi there! Thanks for stopping by =]

Are you actively looking for a new opportunity? Or just checking the market? Well… you might just be in the right place!

We're looking for an SRE/DevOps engineer to join our team in Tbilisi. This team of experts enables our development teams to efficiently build and run great software to excite our customers. Currently the SRE team operates 5 different product platforms, mostly running on AWS cloud infrastructure and utilizing a variety of technologies including Kubernetes, Elasticsearch, Kafka, and Lambdas.

You'll be focusing on the strategic expansion and scalability improvements of our flagship product, while being part of the global team that keeps the lights on across all platforms. As an SRE you'll work closely with engineers and product managers around the world to provide tailored and innovative solutions for the market.

Lightspeed is happy to offer relocation for this role.

What you'll be responsible for:

* Initiate and contribute to continuous improvement of our software delivery processes and practices in a multi-location, multidisciplinary team to empower and accelerate product development
* Use automation extensively to design, configure, manage, and monitor systems in support of our product development teams
* Design and architect operational solutions with the specific goal of increasing the standardization, automation, repeatability, cost-efficiency, and consistency of operational tasks
* Work with developers and other SREs to design and build scalable, reliable, and cost-efficient cloud infrastructure
* Write and maintain architectural, stakeholder, policy, and process documentation
* Adhere to and advocate for best practices, including Infrastructure as Code, monitoring, high availability, disaster recovery, security, and DevOps methodologies
* Collaborate with development teams and use intuition, experience, and understanding to create SLIs, SLOs, and SLAs
* Provide timely assistance and remediation during critical situations and production incidents to help resolve service problems (you will be on call for periods of time)

Must-have skills:

* Kubernetes/Docker
* Bash, Python, Ruby, or any other backend language (programming skills)
* Cloud experience, preferably AWS or GCP
* Good experience provisioning and managing infrastructure with high-availability constraints
* Good communication skills in English and Russian

Nice to have:

* Terraform, configuration management (Puppet/Chef/Ansible/Salt)
* Experience working with data and Linux systems (Elasticsearch, Kafka, MySQL, or any other database)
* CI/CD

What's in it for you:

* Lots of autonomy, a flexible work culture, and the possibility of remote work.
* Everyone matters. Day by day, we improve all our products and processes. We have a global team and flexible work culture that provides many opportunities to grow and develop your career.
* We care. We provide MacBooks for our team members so they can take them anywhere and work from any place. We provide training and educational materials to keep everyone updated.
* We have a concierge service. If you need to meet someone at the airport, get a delivery, or even change the tires on your car, we have a team that will do that for you.
* We are reliable. Lightspeed was founded in 2004 and remains profitable and self-sustaining. Lightspeed is a public company, and we provide RSUs for our employees. We have a head office in Montreal, Canada, and we are now opening offices in Tbilisi and Yerevan. We also have more than 25 offices worldwide, from France to New Zealand.

Who We Are

Powering the businesses that are the backbone of the global economy, Lightspeed's one-stop commerce platform helps merchants innovate to simplify, scale, and provide exceptional customer experiences. Our cloud commerce solution transforms and unifies online and physical operations, multichannel sales, expansion to new locations, global payments, financial solutions, and connection to supplier networks.

Founded in Montréal, Canada in 2005, Lightspeed is dual-listed on the New York Stock Exchange (NYSE: LSPD) and Toronto Stock Exchange (TSX: LSPD). With teams across North America, Europe, and Asia Pacific, the company serves retail, hospitality, and golf businesses in over 100 countries.

#Salary and compensation
No salary data was published by the company, so we estimated a range based on similar jobs related to Design, Python, DevOps, Cloud, Ruby, Senior, Engineer, Linux, Backend, and Digital Nomad roles:

$70,000 — $120,000/year
#Location
Tbilisi, Tbilisi, Georgia
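The Lightspeed role above emphasizes high availability and incident response. One recurring building block is a health probe retried a bounded number of times before declaring a target unhealthy. A minimal sketch (the probe is injected, so the retry logic is testable without a live service):

```python
import time

def check_with_retries(probe, attempts=3, delay=0.0):
    """Run `probe` up to `attempts` times; return True on the first success."""
    for attempt in range(attempts):
        try:
            if probe():
                return True
        except Exception:
            pass  # treat probe errors the same as a failed check
        if attempt < attempts - 1:
            time.sleep(delay)  # back off before the next attempt
    return False

# Stub probe that fails once, then recovers (simulates a flapping service).
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 2

healthy = check_with_retries(flaky_probe, attempts=3)
```

In production the probe would hit a real endpoint, the delay would grow exponentially, and a final `False` would page whoever is on call.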
[→ click to see all jobs](https://www.notion.so/Open-Jobs-at-Ampcontrol-io-e2879a73ba344daea679556a7391fcaf)

## About [Ampcontrol](https://www.ampcontrol.io/)

We're Ampcontrol, building AI-powered software for optimizing electric vehicle (EV) charging.

We are a venture-backed remote team of engineers and energy experts based in the U.S. and Europe building the new way of EV charging. Our primary goals are to enable companies to provide higher-capacity charging on existing infrastructure and to optimize fleet charging logistics.

([link to website](https://www.ampcontrol.io/))

## Our Mission

We're on a mission to help the automotive industry transition to 100% electric vehicles.

We believe in a future of self-managing, reliable, and affordable charging for companies, fleet operators, and humans on our planet.

## The Role

- All levels of experience; the position will be scoped accordingly
- You'll be building and improving our Python backend system, including the core optimization system and our customer-facing APIs
- Maintain and improve the test environment
- Develop our Python3/FastAPI service further, with an eye on performance and scalability
- Work with data scientists to build a stable and powerful architecture for ML applications for real-time optimization
- Build, maintain, and migrate databases, and accommodate time-series data
- Write clean and easily maintainable code for our optimization engine with a focus on reliability and scalability

## You have

- Professional experience in Python software development or QA engineering
- Experience with at least one cloud computing platform
- A good understanding of DevOps tools and methods, including end-to-end testing
- Fluency in English for verbal and written communication
- Motivation to work in the electric vehicle and sustainability industries
- Experience with PostgreSQL and Redis preferred

## Location

**We're a remote team.** You can work from America, United States, Canada, Europe.

Please mention the word **PRODIGIOUSLY** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xMDc=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Salary and compensation
$40,000 — $140,000/year
#Location
Worldwide
# How do you apply?

Just send a message to **[[email protected]](mailto:[email protected])**, along with your CV and your GitHub account if you have one (or a piece of code). If you want to stand out, answer one of the following questions:

- What surprising thing have you learned in the last few months?
- What is a project you're most proud of, and why?
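The Ampcontrol role above mentions accommodating time-series data, such as charger power readings. A minimal sketch using the stdlib `sqlite3` module (the schema and values are hypothetical, not Ampcontrol's actual data model):

```python
import sqlite3

# Hypothetical schema: one row per power reading from a charger.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (charger_id TEXT, ts INTEGER, kw REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("c1", 0, 7.2), ("c1", 60, 11.0), ("c2", 0, 22.0)],
)

# Aggregate a single charger's readings, as an optimizer might.
(avg_kw,) = conn.execute(
    "SELECT AVG(kw) FROM readings WHERE charger_id = ?", ("c1",)
).fetchone()
```

A production system would more likely use PostgreSQL (mentioned in the post) with a time-series extension, but the query shape is the same.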
## Overview
Pupil Labs is the world-leading provider of wearable eye-tracking solutions. We design, engineer, build, and ship hardware (eye-tracking glasses) and software (capture, storage, visualization, and analysis tools) that are used by thousands of researchers in a variety of fields, ranging from medicine and psychology to UX design and human-computer interaction.

You will be working on Pupil Cloud, our cloud-based storage, visualization, enrichment, and analysis platform. This product addresses a number of exciting computational and infrastructural challenges that will involve close collaboration with our R&D and Design teams.

## Requirements
* 5+ years of production experience
* DevOps based around Kubernetes + Docker
* Experience with Chef/Ansible/Puppet
* Solid grasp of Python
* Understanding of security
* Experience with web-based services
* Monitoring, implementing, and ensuring the reliability of HA systems
* Experience with message queues
* Load/stress testing
* Participation in a 24x7 on-call rota

## Technology we use
* Docker
* Kubernetes
* PostgreSQL
* Python
* JavaScript (Node)
* Redis
* Nginx
* Grafana
* Prometheus

## Services we use
* GitLab
* DigitalOcean
* Hetzner
* Amazon AWS
* Sentry
* Google/Firebase

Please mention the words **TUNNEL DECLINE PET** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xMDc=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Salary and compensation
No salary data was published by the company, so we estimated a range based on similar jobs related to DevOps, Python, Senior, Backend, JavaScript, Amazon, and Cloud roles:

$70,000 — $120,000/year
#Location
Worldwide
# How do you apply?

Send your CV with an introduction email. Please apply only if you fit the requirements; we are looking for backend devs with 5+ years of experience. (No junior devs or recruiters, please!)
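The Pupil Labs requirements above include load/stress testing. A minimal sketch of a concurrent load driver that collects success counts and sorted latencies (`request_fn` is injected, so any real HTTP client could be plugged in; the no-op call here is just for illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, total_requests, concurrency):
    """Fire `total_requests` calls across a thread pool.

    Returns (success_count, sorted list of per-request latencies in seconds).
    """
    def timed_call(_):
        start = time.perf_counter()
        ok = request_fn()
        return ok, time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed_call, range(total_requests)))

    successes = sum(1 for ok, _ in results if ok)
    latencies = sorted(lat for _, lat in results)
    return successes, latencies

# Dry run against a no-op "request"; swap in a real call in practice.
successes, latencies = run_load_test(lambda: True,
                                     total_requests=20, concurrency=5)
```

With sorted latencies, percentile reporting (p50/p95/p99) is a simple index into the list, which is what most stress-test summaries boil down to.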