It's an exciting time to join Metrika! A Series A-funded startup in growth mode with teammates across the US, Canada, the UK, and Europe, we are building the world's premier operational intelligence platform for blockchain networks. Metrika partners with blockchain protocols, foundations, and node runners to help them and their community members analyze individual and network-wide metrics of their Distributed Ledger Technology (DLT) networks to maintain and improve their performance, security, and reliability.

These are the early days of our platform, and as a Senior Data Engineer you will be able to contribute to, influence, and take ownership of significant parts of our systems. Our goal is to build a very high-performance platform, capable of analyzing thousands of transactions across multiple blockchain networks in real time.

If you are a Senior Data Engineer with a solid understanding of data lakes, data warehouses, ETL, and distributed systems, a passion for your work, and a desire to work with a geographically distributed team in an emerging industry, join us! No prior experience in blockchain is necessary, but an interest in learning and becoming deeply immersed is.

What you'll be doing:

* Designing, implementing and maintaining data processing pipelines: ingestion, cleanup, transformation, aggregation, and batch and streaming jobs, as well as managing the data lifecycle to ensure affordable and performant long-term storage across our data stores and data lake. You will work closely with our software engineers, SREs and our Analytics team to make sure data flows smoothly across Metrika and beyond, to our customers and users.

* Working under a Scrum or Kanban framework.

* Owning your work. This means being proud of your work, actively striving for excellence, observing the best practices of your craft and always aiming to improve your skills.

* Understanding, participating and contributing to the company goals, regardless of your role. Metrika is a small company with a very inclusive culture. We are looking for people who share those values with us.

Please note: Our Engineering team is predominantly based in Europe and the eastern United States. This position is currently open to those resident and currently able to work in the European Economic Area (EU, Norway, Liechtenstein), Switzerland, the UK, as well as the eastern United States/Canada (UTC-4/UTC-5 time zones).

Metrika Inc. is an Equal Opportunity employer. All applicants will be considered without regard to race, color, national origin, ethnicity, gender, disability, sexual orientation, gender identity, or religion.

We are looking for individuals with:

* A Bachelor's degree in Computer Science, Electrical Engineering, Physics or Mathematics. Master's or higher degrees preferred.

* Several years of experience in data engineering, in large-scale production environments.

* Familiarity with our tooling. At Metrika we mostly use Python for data processing; most of our ETL/data processing jobs are written in Python. You will need some familiarity with scheduling systems (e.g. Airflow, Luigi), data transformation tools (e.g. dbt), and distributed compute frameworks (e.g. Apache Spark, Apache Flink, Ray.io), as well as a solid understanding of the concepts of data governance and data lineage/provenance.

* An excellent understanding of TDD, agile development methodology and version control.

* The ability to function autonomously to solve problems and deliver working software. Our remote environment and geographic distribution require people who can work well on their own.

* The ability to communicate well with your team, both interactively and asynchronously, and to be a positive, constructive team member.

You'll be a great fit if you have:

* Worked in and contributed to a Big Data production environment handling multiple GB of data per day.

* Good knowledge of Python.

* Experience with Apache Spark, Apache Flink, Ray.io and Airflow.

* Experience with using and building CI/CD pipelines.

* Experience with Docker/Kubernetes or serverless environments.

* Experience with SQS/SNS, Apache Kafka, RabbitMQ or other brokers.

* Experience with public cloud providers, e.g. AWS, GCP, Azure, DigitalOcean.

* Experience with blockchain systems.

Once you submit your application, you will receive an automated email from the recruitee.com domain within a few minutes acknowledging that we have received your application. If you do not receive this email within a few minutes, please check your spam folder or other filtered folders. To ensure our future communications reach you, please add emails from the recruitee.com domain to your safe list.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Python, Serverless, Cloud, Node, Senior and Engineer roles:

$65,000 – $110,000/year
#Benefits

401(k)
Distributed team
Async
Vision insurance
Dental insurance
Medical insurance
Unlimited vacation
Paid time off
4 day workweek
401k matching
Company retreats
Coworking budget
Learning budget
Free gym membership
Mental wellness budget
Home office budget
Pay in crypto
Pseudonymous
Profit sharing
Equity compensation
No whiteboard interview
No monitoring system
No politics at work
We hire old (and young)
#Location
Remote job
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
People.ai is hiring a Remote Senior Software Engineer Data Modeling
People.ai is an AI-powered foundational data platform that helps customers unlock go-to-market success and growth by providing teams with solutions built specifically for their needs: enhanced pipeline visibility, more actionable insights, and a single source of truth for all sales activities. People.ai's unique dataset, consisting of trillions of sales activities, millions of deals, 160 million business contacts, and 69 approved patents related to AI-based business insights, sets the company apart. Companies such as Verizon, IBM Red Hat, Snowflake, Zoom, and Palo Alto Networks rely on its enterprise-ready, patented AI technology.

At People.ai, we believe that people enrich the world around them in countless ways. We believe that the more time they spend applying their creativity, resourcefulness, and critical thinking to the activities that matter most in their professional life, the more effective professionals they become. Our team is a diverse, outspoken group of creatives and critical thinkers, hyper-focused on driving change and growth. We embrace different. We applaud non-traditional career paths. We're inspired by people who have made processes their own.

As a Senior Software Engineer on the Data Modeling team, you will build and maintain a scalable, real-time architecture for processing vast amounts of data to power the next generation of our LLM-based query engine (SalesAI), which will answer questions of varying complexity and unlock rich insights for People.ai's customers. You will also maintain and enhance our batch processing pipelines. You will collaborate across R&D, leverage customer feedback, perform extensive data analysis, and design and develop algorithms, storage formats, and APIs to unlock high performance and low latency.

We value ownership highly: the ability to take an idea through all the stages from conception to shipping a product. This is reflected throughout our company, but is especially true in engineering. As an engineer at People.ai, you'll be part of a highly independent and autonomous team. Since we're building out a robust data layer that needs to be presented elegantly to the end user, you'll be working with a large array of different technologies and fields. Expect lots of interesting challenges.

Responsibilities:
* Design and implement core backend services and data pipelines.
* Ensure high data quality and accuracy of generated insights.
* Document design choices and operational knowledge to successfully deploy and run services.
* Provide appropriate test coverage, unit and integration testing, with a focus on performance and cost efficiency for your feature ownership areas.
* Ensure robust alerting, dashboards, and runbooks for production services are in place.
* Collaborate within the team and with other engineering teams to build new features and products according to business needs.
* Follow software design and development best practices and promote such practices in the team.

Requirements:
* 5+ years of professional experience working on backend systems in an enterprise environment
* 2+ years of experience programming in Python 2.x/3.x, Scala or Java
* Professional experience with data analysis/data science tasks
* Experience with LLMs is a plus
* Understanding of SOA, microservices, and event-driven architecture
* Experience with an enterprise-grade stack for scalable web apps, including message brokers, in-memory storage, NoSQL, and key-value databases
* Strong knowledge of TDD, unit testing, and automated test paradigms
* Experience with SQL and RDBMS solutions
* Experience with large-scale data processing (Spark)
* Experience with Elasticsearch and/or Kafka is preferred
* Experience with containerized applications, Docker, and Kubernetes
* A DevOps mindset; AWS experience is a plus
* Bachelor's Degree in Computer Science, Computer Engineering, or a closely related discipline

$78,000 - $90,000 a year

Headquartered in San Francisco, CA, People.ai is backed by Y Combinator and Silicon Valley's top investors, including ICONIQ Capital, Andreessen Horowitz, Lightspeed Venture Partners, Akkadian Ventures, and Mubadala Capital. People.ai has received recognition in the Gartner Market Guide for Revenue Intelligence Platforms, the Inc. 5000 fastest-growing companies list, the Deloitte Technology Fast 500 list, the Y Combinator Top Companies list, and the Forbes AI 50 list in 2022.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Python, DevOps, Senior, Sales, Engineer and Backend roles:

$70,000 – $110,000/year
#Location
Poland
What this job can offer you

This is an exciting time to join the growing Data Team at Remote, which today consists of over 15 Data Engineers, Analytics Engineers and Data Analysts spread across 10+ countries. Across the team we're focused on driving business value through impactful decision making. We're in a transformative period where we're laying the foundations for scalable company growth across our data platform, which truly serves every part of the Remote business. This team would be a great fit for anyone who loves working collaboratively on challenging data problems and making an impact with their work. We're using a variety of modern data tooling on the AWS platform, such as Snowflake and dbt, with SQL and Python extensively employed.

This is an exciting time to join Remote and make a personal difference in the global employment space as a Senior Data Engineer, joining our Data team, composed of Data Analysts and Data Engineers. We support decision making and operational reporting needs by translating data into actionable insights for non-data professionals at Remote. We're mainly using SQL, Python, Meltano, Airflow, Redshift, Metabase and Retool.

What you bring

* 5+ years of experience in data engineering; high-growth tech company experience is a plus
* Strong experience building data extraction/transformation pipelines (e.g. Meltano, Airbyte) and orchestration platforms (e.g. Airflow)
* Strong experience working with SQL, data warehouses (e.g. Redshift) and data transformation workflows (e.g. dbt)
* Solid experience using CI/CD (e.g. GitLab, GitHub, Jenkins)
* Experience with data visualization tools (e.g. Metabase) is considered a plus
* A self-starter mentality and the ability to thrive in an unstructured and fast-paced environment
* Strong collaboration skills and enjoyment of mentoring
* A kind, empathetic, and patient manner
* Fluent written and spoken English
* Experience working remotely is not required, but is considered a plus

Key Responsibilities

* Playing a key role in data platform development and maintenance:

* Managing and maintaining the organization's data platform, ensuring its stability, scalability, and performance.
* Collaborating with cross-functional teams to understand their data requirements and optimize data storage and access, while protecting data integrity and privacy.
* Developing and testing architectures that enable data extraction and transformation to serve business needs.

* Further improving our data pipeline and monitoring systems:

* Designing, developing, and deploying efficient Extract, Load, Transform (ELT) processes to acquire and integrate data from various sources into the data platform.
* Identifying, evaluating, and implementing tools and technologies to improve ELT pipeline performance and reliability.
* Ensuring data quality and consistency by implementing data validation and cleansing techniques.
* Implementing monitoring solutions to track the health and performance of data pipelines and to identify and resolve issues proactively.
* Conducting regular performance tuning and optimization of data pipelines to meet SLAs and scalability requirements.

* Digging deep into dbt modelling:

* Designing, developing, and maintaining dbt (Data Build Tool) models for data transformation and analysis.
* Collaborating with Data Analysts to understand their reporting and analysis needs and translate them into dbt models, making sure they respect internal conventions and best practices.

* Driving our culture of documentation:

* Creating and maintaining technical documentation, including data dictionaries, process flows, and architectural diagrams.
* Collaborating with cross-functional teams, including Data Analysts, SREs (Site Reliability Engineers) and Software Engineers, to understand their data requirements and deliver effective data solutions.
* Sharing knowledge and offering mentorship, providing guidance and advice to peers and colleagues, and creating an environment that empowers collective growth.

Practicals

* You'll report to: Engineering Manager - Data
* Team: Data
* Location: For this position we welcome everyone to apply, but we will prioritize applications from the following locations as we encourage our teams to diversify: Vietnam, Indonesia, Taiwan and South Korea
* Start date: As soon as possible

Remote Compensation Philosophy

Remote's Total Rewards philosophy is to ensure fair, unbiased compensation and fair equity pay along with competitive benefits in all locations in which we operate. We do not agree to or encourage cheap-labor practices, and therefore we ensure we pay above in-location rates. We hope to inspire other companies to support global talent hiring and bring local wealth to developing countries.

At first glance our salary bands seem quite wide; here is some context. At Remote we have international operations and a globally distributed workforce. We use geo ranges to account for geographic pay differentials as part of our global compensation strategy, to remain competitive in various markets while hiring globally.

The base salary range for this full-time position is $49,650 USD to $111,700 USD. Our salary ranges are determined by role, level and location, and our job titles may span more than one career level. The actual base pay for the successful candidate in this role is dependent upon many factors, such as location, transferable or job-related skills, work experience, relevant training, business needs, and market demands. The base salary range may be subject to change.

Application process

* Interview with recruiter
* Interview with future manager
* Async exercise stage
* Interview with team members

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Python, Testing, Senior and Engineer roles:

$60,000 – $110,000/year
#Location
Rio de Janeiro, Rio de Janeiro, Brazil
Phaidra is hiring a Remote Senior Software Engineer
Who You Are

We are looking for a driven Software Engineer (MLOps) to be part of our growing AI Platform team. You are bold and creative, and have deep empathy for customers who may not be tech-savvy. You will design and implement significant parts of the code base and will have the opportunity to make an immediate impact with your work and guide the product and team as we grow.

You are curious and like to understand technologies and their tradeoffs in depth, providing technical guidance to the team and peers as and when required. Leading by example, you have accumulated a wealth of insights and experience from your hands-on involvement in the field, and you are committed to rolling up your sleeves and getting work done. You like joining and supporting other engineers in their work, learning from them as well as letting them benefit from your expertise and experience.

You have the motivation and skills to identify technical product needs, initiate projects, and own their delivery, involving engineering peers as needed. You are comfortable with respectfully challenging the status quo to drive and deliver technical excellence in the team.

We are seeking a team member located within one of the following areas: USA/Canada/UK.

Responsibilities

The AI Platform team you are joining is responsible for building the core platform that powers model training, inference and decision making in our products. Furthermore, the team owns MLOps and the services hosting our AI capabilities. Productionizing results from Research, as well as extending our systems and providing support according to our customers' needs, falls within the team's responsibilities as well. You will join this team as an experienced engineer with a focus on MLOps solutions to grow our expertise in that area, but you will also contribute as a software engineer more widely in the team.

As an organization, we strongly believe in expertise across the stack. As such, you will experience flavors of Machine Learning, Software Engineering, Distributed Systems, MLOps and DevOps.

In particular, you will:

* Design, build and lead the MLOps initiatives and vision for the AI Platform to strengthen automation, orchestration, versioning, observability, monitoring and collaboration for the platform.
* Build and design scalable components for the AI Platform to allow high-throughput training and inference for RL agents doing real-time inference for autonomous control of industrial systems.
* Contribute to the design and implementation of the product backend by writing REST & gRPC API services and scalable event-driven backend applications.
* Design clear, extensible software interfaces for the team's customers and maintain a high release quality bar.
* Design and optimize data storage and retrieval mechanisms for high throughput, security and ease of access.
* Perform DevOps duties of CI/CD, release and deployment management.
* Be a part of our global production on-call team, and own and operate your services in production, meeting Phaidra's high bar for operational excellence.
* Lead cross-functional initiatives, collaborating with engineers, product managers and TPMs across teams.
* Mentor your peers and be a technical role model in the team.

Onboarding

In your first 30 days…

* You will be immersed in an onboarding program that introduces you to Phaidra and our product.
* You will spend time in the Engineering org, learning how the teams operate, interact, and approach problems.
* You will read various parts of our handbook and familiarize yourself with the documentation culture at Phaidra.
* You will set up your development environment and start working on an onboarding exercise that will introduce you to various parts of our code base.
* You will learn about how we use agile and be able to navigate our sprint boards and backlogs.
* You will learn about various team standards and development & release processes.
* You will start to learn about our system architecture and infrastructure.
* You will start picking up a few good "first tasks" to get yourself accustomed to the end-to-end release flow.

In your first 60 days…

* You will get a solid understanding of what Phaidra does and how we do it.
* You will meet with team members across Phaidra and start building relationships that will help you be successful at your job.
* You will complete the onboarding exercise and be on your way to completing your first production task.
* You will take ownership of the MLOps work on the team, identify gaps and propose roadmap items on the topic.

In your first 90 days…

* You will be fully integrated in the team and with team members across the company.
* You will have a more in-depth understanding of our system architecture and infrastructure.
* You will complete your first on-call experience, helping monitor and improve our production environments.
* You will become an expert with our tooling.
* You will start to contribute to knowledge sharing throughout Phaidra and the team.
* You will proactively drive MLOps topics in the team and represent them technically throughout the company.

Key Qualifications

* 7+ years of work experience.
* Bachelor's or Master's in Computer Science, or equivalent experience.
* Strong experience in designing and implementing MLOps solutions for AI production systems.
* Expertise in production software engineering: relational and non-relational data modelling, microservices, understanding of event-driven systems, etc.
* Strong experience building large-scale multi-tenant systems with high availability, fault tolerance, performance tuning, monitoring, and statistics/metrics collection.
* Strong expertise in Python and cloud environments.
* Good grasp of Machine Learning (especially Deep Learning) fundamentals.
* Ability to collaborate and communicate effectively in an all-remote setting.
* Doing your work with curiosity, ownership, transparency and directness, outcome orientation, and customer empathy.

Bonus

* Experience as a service owner of a real-time production system: operating and monitoring services in production, including incident management and observability tooling such as Prometheus, Grafana, Tempo or equivalent offerings.
* Experience building applications that can be deployed in cloud, hybrid or on-prem environments.
* Exposure to Reinforcement Learning.

Our Stack

* Languages: (Backend) Python, Go; (Frontend) JavaScript/TypeScript, React; customer SDK & clients: C# .NET
* PyTorch
* Cypress
* Docker, Kubernetes, Terraform & Kapitan
* GitLab CI, ArgoCD, Atlantis, Vercel
* GCP: GKE, PubSub, CloudSQL, BigTable, Postgres, etc.
* Ray.io
* REST & gRPC microservices
* Poetry, Pantsbuild

General Interview Process

All of our interviews are held via Google Meet, and an active camera connection is required.

* Initial screening interview with a People Operations team member (30 minutes): The purpose of this interview is to meet you, learn more about your background, and discuss what you are looking for in a new position.
* Hiring manager interview (30 minutes): The purpose of this meeting is for you to get to know the manager for the role. This chat will mainly focus on your previous experience and technical background. You can expect to talk about projects that you have worked on in the past and ask any questions about the team and role.
* Technical Interview 1 (60 minutes): The purpose of this interview is to assess your skills in Machine Learning and related mathematics.
* Technical Interview 2 (90 minutes): In this interview, we will go over a real-world MLOps problem. You can expect to draw architecture diagrams using boxes and arrows in your browser. We will talk about system design, scalability and monitoring.
* Meeting with VP of Engineering (30 minutes): This interview is a combination of technical and cultural fit assessment. You will cover your technical experience and the skills that you bring, and have an opportunity to ask any questions about the team's culture or vision.
* Culture fit interview with Phaidra's co-founders (30 minutes): This interview focuses on alignment with Phaidra's values.

Base Salary

US residents: $115,200 - $208,800/year

UK residents: £96,400 - £144,000/year

This position will also include equity.

These are good-faith estimates of the base salary range for this position. Multiple factors such as experience, education, level, and location are taken into account when determining compensation.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Python, DevOps, Cloud, API, Senior, Engineer and Backend roles:

$65,000 – $110,000/year
#Location
Seattle, Washington, United States
Global Fishing Watch is hiring a Remote Machine Learning Engineer Sensing
Background: Global Fishing Watch is an international, non-profit organization committed to advancing ocean governance through increased transparency. We create and publicly share knowledge about human activity at sea to enable fair and sustainable use of our ocean. Founded in 2015 through a collaboration between Oceana, SkyTruth, and Google, GFW became an independent non-profit organization in 2017. Using cutting-edge technology, we create and publicly share map visualizations, data and analysis tools to enable scientific research and drive a transformation in how we manage our ocean. By 2030, we aim to monitor and map all commercial activity at sea, including all industrial fishing vessels, small-scale fishing activity, all large non-fishing vessels, and all fixed infrastructure such as aquaculture and oil rigs. We also plan to work with intergovernmental organizations and 30 governments around the globe to promote the adoption of transparency more widely and publicly share ocean data to drive better management of marine resources.

The Position

The Research and Innovation team at Global Fishing Watch (GFW) connects data science and machine learning experts with the scientific community to produce new datasets, publish impactful research, and empower others to use our data. This team harnesses satellite technology, machine learning, and big data to shed light on some of the most pressing issues facing the ocean.

We are now working to map the global footprint of commercial activity at sea, including the activity of all ocean-going vessels and fixed infrastructure. This work involves combining deep learning and data fusion techniques with petabytes of satellite imagery (radar and optical) and billions of GPS positions from vessels, mostly from the Automatic Identification System (AIS) and Vessel Monitoring Systems.

The Machine Learning Engineer will assist with large data pipelines of satellite imagery and help build computer vision models to detect and classify maritime objects in imagery data. The initial focus will be on vessel detection in high-resolution (3 m) PlanetScope optical imagery from Planet Labs, leveraging an existing model architecture developed for Sentinel-2. Subsequent work includes implementing new models to expand the detection capability to offshore infrastructure using new satellite imagery sources. The candidate will also collaborate closely with other members of the Research and Innovation team to correlate detected vessels (position, time and length) with vessels tracked by AIS. Finally, the candidate will work closely with the GFW Engineering and Product teams to ensure solutions are compatible and scalable within our cloud infrastructure.

The incumbent will gain experience working with leading researchers in the field and will interface daily with GFW's team of data scientists and machine learning experts. They will develop further technical skills in programming, big data, and cloud computing while working for a globally diverse and fully distributed organization. The successful candidate will be organized and excited to help Global Fishing Watch develop strong partnerships and cutting-edge research.
Principal Duties and Responsibilities

Model development for small object detection

* Design, train, and evaluate computer vision models for object detection in satellite imagery, with an emphasis on vessel detection in optical imagery
* Implement preprocessing pipelines to obtain imagery and prepare it for annotation and modelling
* Devise annotation strategies and tools for labelling vessels and fixed infrastructure in satellite images
* Improve our training datasets and build new training datasets for other human-made objects, potentially managing external annotation services

Additional tasks may include

* Provide technical support to the senior machine learning engineer(s) responsible for developing and advancing other Global Fishing Watch models
* Assist data fusion efforts to integrate detections from multiple sources (e.g. Sentinel-1 SAR and Sentinel-2 optical), accounting for the recall of each model, length of the objects, cloudiness, and image resolution, among other factors
* Analyze large amounts of data from various sources, such as vessel tracking, identity, and satellite imagery, to identify trends, anomalies, and insights
* Ensure the integrity and accuracy of key data pipelines and research BigQuery tables
* Maintain and improve internal Python tools, such as modules and template repositories, to assist with migrating research projects from proofs-of-concept to automated prototypes
* Lead or support eventual research publications and technical blog posts

Candidate description

Skills you should have

* Bachelor's degree and at least four years of professional experience, or an equivalent combination of education and experience, in physical/earth sciences or a related field
* Demonstrated skills and experience with Python
* Strong foundation in mathematics and statistics
* Familiarity working with geospatial data
* Demonstrated experience working with cloud compute platforms and virtualized environments
* Self-motivated with a strong curiosity and desire to learn new skills
* Willingness to take ownership of projects and communicate project updates
* Written and verbal communication skills in English
* Ability to work with a remote team and embrace Slack, Google Suite, Jira, Notion and other collaborative tools

Also great

* Some experience with database query languages such as SQL
* Demonstrated experience with computer vision models
* Demonstrated experience with frameworks such as TensorFlow or PyTorch
* Familiarity with containerization tools like Docker and execution of models inside them
* An appreciation for the complexities and rewards of collaborating in a remote, global and inclusive environment
* Experience engaging with academic researchers and the peer-review process
* Awareness of ethical considerations related to privacy and bias in satellite imagery analysis

The successful candidate will meet most, but not necessarily all, of the criteria above. Although it is obviously helpful, we do not expect that you already have deep knowledge of building models or our key programming languages; we do expect that you have the aptitude to develop these skills and knowledge, and that you are excited about revealing human activity across the global ocean using these tools. If you don't think you check all the boxes, but believe you have unique skills that make you a great fit for the role, we want to hear from you!

Additional Information

Reporting to: Senior Data Scientist / Senior Data Science Manager

Manages: NA

Location: Remote - we welcome candidates based in any country

Term: Permanent position

FT/PT: Full-time

Recruiting process

A cover letter along with a CV will be requested to see how your experience and interest connect to the position. We expect the cover letter to explain how your skills, interests, and aspirations align with the role.
If selected for consideration, the hiring process for this position will include a formal 45-minute interview with 2-3 staff, followed by a 30-minute administrative screening by a Human Resources manager. Candidates advancing beyond this round will be asked to take a technical assessment. Lastly, an informal 30-minute call with 3-4 members of the Research and Innovation team will be held with finalists.

Please apply by January 26, 2024.

Working Hours: Global Fishing Watch supports flexible working, so the pattern of hours may vary according to operational and personal needs. The position will be part of a global team spanning many different time zones, so the candidate should be able to accommodate semi-regular early/late meetings in order to work effectively. Weekend work may be required on occasion. The post holder may be required to undertake regional and international travel. No overtime is payable.

Compensation: The compensation range for this position is US$90,000-$110,000 for US-based employees. For applicants located outside of the US, the pay range will be adjusted to the country of hire. Compensation is commensurate with experience and will vary depending on the hired candidate's country of residence, in accordance with local laws and regulations. GFW offers pension/retirement, health and other benefits commensurate with similar-level GFW employees in the country of employment. The position may be filled as a GFW employee or consultant, depending on the country of residence.

Equal opportunities: Global Fishing Watch is an equal opportunities employer. Global Fishing Watch is committed to promoting diversity and inclusion within our organization and in the greater ocean management and conservation community. We believe that diverse backgrounds, skills, knowledge, and viewpoints make us a stronger organization. Bringing together professionals who possess broad experiences and a spectrum of perspectives will enable us to reach our goal of improved ocean governance faster. We hire and promote qualified professionals without regard to actual or perceived race, color, religion or belief, sex, sexual orientation, gender identity, marital or parental status, national origin, age, physical or mental disability or medical condition, or any other characteristic protected by applicable law. Our organizational goals match the urgent challenges facing our global ocean, and our mission is designed to help secure a healthy ocean for all. We are committed to building a workforce that is representative of humanity's diversity, by providing an inclusive and welcoming environment for all employees of Global Fishing Watch and for our partners, vendors, suppliers, and contractors.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Python, Docker, Accounting, Education, Cloud, Senior and Engineer roles:

$60,000 — $100,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location

Washington, District of Columbia, United States
This job post is closed and the position is probably filled. Please do not apply.
Closed by robot after the apply link errored with code 403, 4 months ago.
Building the Future of Crypto

Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.

What makes us different? Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you'll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken's focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.

Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission.

As a fully remote company, we have Krakenites in 60+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Kraken NFT, and Kraken Futures.

Become a Krakenite and build the future of crypto!

Proof of work

The team

The Integration & Automation team is responsible for owning and managing the various critical integrations and automations supporting multiple business functions, including HR, Finance, Engineering, IT and Security. This team works closely with our business stakeholders to identify opportunities and pain points and groom requirements to deliver best-in-class solutions using integration and automation capabilities. This team supports business-critical operations and executes on strategic business projects.

The Sr. Engineer - Integration & Automation will be responsible for partnering closely with the business to understand pain points and areas of opportunity within HR, Finance and Legal processes and operations.
This role will design and develop automations and integrations to improve data accuracy, increase cost savings, and increase productivity and overall efficiency. This position will work cross-functionally with other departments, including Product and Engineering, to collaborate on larger company-wide initiatives. This position will act as a senior member of the team and provide technical subject matter expertise and advice to the business to implement best-in-class solutions for security, performance and operational efficiency.

This role requires thorough knowledge and understanding of industry best practices and modern tools and methodologies in the integration and automation domain. This individual has prior experience implementing and administering critical integrations and automation solutions for key business functions. This role reports to the Sr. Director of IT.

The Opportunity

* Groom requirements and conduct design sessions with the business for integrations and automations.
* Architect and build integrations between core internal and external applications (including on-prem and on-cloud).
* Develop automations to improve efficiency within business operations and save staff hours.
* Design, develop, test, launch and support critical integrations and automations within enterprise applications.
* Work with IT & Security teams to understand automation needs and build automations that scale operations and enable the highest level of security.
* Advise on best practices and recommend best-in-class integration designs and solutions considering scale, security, and performance.
* Work cross-functionally with various engineering and business teams to document detailed process and data flows to align on design.
* Build prototypes and demo them with customers to show value and gather feedback for evolving the solution.
* Explore new technologies, embrace change in the tech stack, and develop methodologies for continuous improvement.
* Document and publish integration and automation artifacts for various projects.
* Assess existing technologies and applications to identify tooling for the future.

Skills you should HODL

* 4+ years of experience developing complex integration solutions in a hybrid environment involving iPaaS solutions (like OneCloud, Dell Boomi, Mulesoft, etc.) and on-prem solutions to integrate between critical apps and systems for various business departments.
* 3+ years of experience building automations in a hybrid environment involving on-cloud solutions (like Zapier, Workato, etc.) and on-prem solutions to automate critical business operations and procedures for various business units like HR, Finance, Product, etc.
* Experience building integration jobs or automations using Golang and TypeScript/Node.js.
* Programming experience with Python and/or Rust is a plus.
* Experience with Point-to-Point Integration, Hub-and-Spoke Integration, and Bus Architecture will be a plus.
* Experience working with cloud computing platforms via Infrastructure as Code (IaC) such as Pulumi, Terraform, and/or Ansible will be a plus.
* Deep technical knowledge and experience with enterprise-level technologies (HRIS systems, identity management, secure messaging and collaboration, data storage and organization applications) is critical.
* Experience building and managing enterprise software and integration solutions for 3rd-party enterprise products is critical.
* Prior experience managing a team of engineers will be a plus.
* Experience with RPA or Generative AI is a plus.
* Experience building and managing test automation for critical applications will be a plus.
* Excellent communication and interpersonal skills are essential.

Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives.
We hire strictly based on merit, meaning we seek out the candidates with the right abilities, knowledge, and skills considered the most suitable for the job. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!

As an equal opportunity employer, we don't tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status or any other protected characteristic as outlined by federal, state or local laws.

Stay in the know

Follow us on Twitter
Learn on the Kraken Blog
Connect on LinkedIn

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, Crypto, Python, Finance, Cloud, Senior, Golang and Engineer roles:

$60,000 — $100,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location

Remote Anywhere
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.