The Role\n\nLeafLink is seeking a Principal Data Engineer to join our remote-friendly team, headquartered in NYC. We are looking for someone who is passionate about working with teams that solve interesting, large-scale problems quickly. This impactful position enables LeafLink to coordinate and integrate third-party data sets with proprietary data to produce valuable insights into business and customer needs. As a member of our engineering team, you will be in a position to have a direct and lasting impact everywhere in the company. Your contribution will be immediate and have positive ripple effects across not just our business, but also the business of each of our customers. \n\nLeafLink is currently tackling a large-scale platform overhaul that will strengthen our position as a technical leader within the industry. As such, this role has the opportunity to help lead, shape, and grow the data and machine learning architecture within our platform, as well as work with new and growing technologies. It’s a very exciting time to join our engineering team!\n\nIdeal candidates for this position should possess a keen mind for solving tough problems with the right solution, partnering effectively with various team members along the way. They should be deeply passionate about organizing and managing data at scale for a variety of use cases. They should be personable, efficient, flexible, and communicative, with a strong desire to implement change, grow, and mature, and a genuine passion for their work. This role comes with the opportunity to be a high performer within a fast-paced, dynamic, and quickly growing department.\nWhat You’ll Be Doing\n\n\n* Audit, design, and maintain a high-performing, modular, and optimal data pipeline architecture for structured and unstructured use cases around machine learning, reporting, and analytics \n\n* Design and co-build, with Cloud and DevOps, the infrastructure and operations required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Python, and AWS cloud technologies\n\n* Keep up to date on modern technologies and trends and advocate for their inclusion within products when it makes sense\n\n* Analyze and evaluate existing solutions and decide whether to extend or refactor them, with a major focus on improving our pipeline and reporting performance\n\n* Work with the CTO and department stakeholders to properly plan short- and long-term goals, and define and execute a technical roadmap that continues to evolve LeafLink’s data capabilities and functionality to meet the needs of our Business and Product Vision\n\n* Work collaboratively with multiple cross-functional agile teams to help deliver end-to-end products and features enabled by our data pipeline, seeing them through from conception to delivery\n\n* Help define, document, evolve, and evangelize high engineering standards, best practices, tenets, and data management & governance across data and analytics engineering\n\n* Move quickly and intelligently, treating technical debt as your nemesis and eliminating risk\n\n* Effectively communicate the complexity of your work to technical and non-technical audiences through both written and verbal mediums\n\n* Design, develop, and test data models in our data warehouse that enable data and analytics processes\n\n* Help define and build our enterprise data catalog and dictionary \n\n* Troubleshoot, diagnose, and address data quality issues quickly and effectively while implementing solutions to combat them 
at scale, including improved quality controls, observability, and monitoring\n\n* Provide mentorship and growth opportunities to our backend and data engineers while creating repeatable and scalable solutions and patterns\n\n\n\nWhat You’ll Bring to the Team \n\n\n* Minimum of 10 years’ experience in a professional working environment on a data or engineering team\n\n* Advanced SQL knowledge and experience with relational and non-relational databases, including query authoring and working familiarity with a variety of data stores\n\n* Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement\n\n* Expertise writing Python processing jobs that ingest structured and unstructured data from a variety of sources and formats, such as REST APIs, flat files, and logs, and that scale from small to large dataset ingestions (a brief illustrative sketch appears at the end of this listing)\n\n* Candidates should also have experience using the following software/tools:\n\n\n\n* Experience with object-oriented/object function scripting in Python and data processing libraries such as requests, pandas, and sqlalchemy\n\n* Experience with relational SQL and NoSQL databases, such as Redshift or comparable cloud-based OLAP databases such as Snowflake\n\n* Experience with data pipeline and workflow management tools such as Airflow\n\n* Experience with AWS cloud services\n\n* Hands-on experience with technologies such as DynamoDB, Terraform, Kubernetes, Fivetran, and dbt is a strong plus\n\n* Experience designing and implementing machine learning enablement tools and infrastructure\n\n* Experience leveraging API-based LLMs, dynamic prompt generation, and fine-tuning\n\n\n\n* Comfortable working in a fast-paced growth business with many collaborators and quickly evolving business needs\n\n* Individual-contributor leadership for our data and analytics engineers, and specialization within our current Platform Engineering team around enterprise data architecture and best practices\n\n* Consistency and standards for how we visualize and use our enterprise data at LeafLink, by helping us define our first Data Dictionary and Catalog \n\n\n\nLeafLink Perks & Benefits\n\n\n* Flexible PTO - you’re going to be working hard, so enjoy time off with no cap!\n\n* A robust stock option plan to give our employees a direct stake in LeafLink’s success\n\n* 5 Days of Volunteer Time Off (VTO) - giving back is important to us and we want our employees to prioritize cultivating a better community\n\n* Competitive compensation and 401k match\n\n* Comprehensive health coverage (medical, dental, vision)\n\n* Commuter Benefits through our Flexible Spending Account\n\n\n\n\nLeafLink’s employee-centric culture has earned us a coveted spot on BuiltInNYC’s Best Places to Work in 2021 list. Learn more about LeafLink’s history and the path to our First Billion in Wholesale Cannabis Orders here. \n\n
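To make the Python ingestion requirement above concrete, here is a minimal, hypothetical sketch of such a processing job: it pulls paginated records from a REST API with requests, normalizes them with pandas, and appends them to a warehouse table through SQLAlchemy. The endpoint path, environment variables, and table name are illustrative placeholders, not LeafLink’s actual systems.

```python
# Minimal, hypothetical ingestion job: REST API -> pandas -> warehouse table.
# The URL, token, table name, and connection string are placeholders.
import os

import pandas as pd
import requests
from sqlalchemy import create_engine


def fetch_orders(base_url: str, token: str) -> pd.DataFrame:
    """Pull paginated JSON records from a REST endpoint into a DataFrame."""
    records, page = [], 1
    while True:
        resp = requests.get(
            f"{base_url}/orders",
            headers={"Authorization": f"Bearer {token}"},
            params={"page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return pd.json_normalize(records)


def load(df: pd.DataFrame, table: str, conn_str: str) -> None:
    """Append the normalized frame to a warehouse table (Postgres/Redshift-style)."""
    engine = create_engine(conn_str)
    df.to_sql(table, engine, if_exists="append", index=False, method="multi")


if __name__ == "__main__":
    frame = fetch_orders(os.environ["API_BASE_URL"], os.environ["API_TOKEN"])
    frame["ingested_at"] = pd.Timestamp.utcnow()  # simple lineage column
    load(frame, "raw_orders", os.environ["WAREHOUSE_URL"])
```

In a production pipeline this kind of job would typically be scheduled and retried by an orchestrator such as Airflow rather than run ad hoc.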
#Salary and compensation\nNo salary data was published by the company, so we estimated the salary based on similar jobs related to Python, DevOps, Cloud, NoSQL, and Engineer roles:\n\n
$60,000 — $100,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nNew York City, New York, United States
\nRestaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365’s culture is focused on empowering team members to produce top-notch results while elevating their skills. We’re constantly evolving and improving to make sure we are and always will be “Best in Class” ... and we want that for you too!\n\n\nRestaurant365 is looking for an experienced Data Engineer to join our data warehouse team that enables the flow of information and analytics across the company. The Data Engineer will participate in the engineering of our enterprise data lake, data warehouse, and analytic solutions. This is a key role on a highly visible team that will partner across the organization with business and technical stakeholders to create the objects and data pipelines used for insights, analysis, executive reporting, and machine learning. You will have the exciting opportunity to shape and grow with a high-performing team and the modern data foundation that enables the data-driven culture to fuel the company’s growth. \n\n\n\nHow you'll add value: \n* Participate in the overall architecture, engineering, and operations of modern data warehouse and analytics platforms. \n* Design and develop the objects in the Data Lake and EDW that serve as core building blocks for the semantic layer and datasets used for reporting and analytics across the enterprise. \n* Develop data pipelines, transformations (ETL/ELT), orchestration, and job controls using repeatable software development processes, quality assurance, release management, and monitoring capabilities. \n* Partner with internal business and technology stakeholders to understand their needs and then design, build, and monitor pipelines that meet the company’s growing business needs. \n* Look for opportunities for continuous improvements that automate workflows, reduce manual processes, reduce operational costs, uphold SLAs, and ensure scalability. \n* Use an automated observability framework for ensuring the reliability of data quality, data integrity, and master data management. \n* Partner closely with peers in Product, Engineering, Enterprise Technology, and InfoSec teams on the shared enterprise needs of a data lake, data warehouse, semantic layer, transformation tools, BI tools, and machine learning. \n* Partner closely with peers in Business Intelligence, Data Science, and SMEs in partnering business units to translate analytics and business requirements into SQL and data structures. \n* Responsible for ensuring platforms, products, and services are delivered with operational excellence and rigorous adherence to ITSM process and InfoSec policies. \n* Adopt and follow sound Agile practices for the delivery of data engineering and analytics solutions. \n* Create documentation for reference, process, data products, and data infrastructure. \n* Embrace ambiguity and other duties as assigned. \n\n\n\nWhat you'll need to be successful in this role: \n* 3-5 years of engineering experience in enterprise data warehousing, data engineering, business intelligence, and delivering analytics solutions \n* 1-2 years of SaaS industry experience required \n* Deep understanding of current technologies and design patterns for data warehousing, data pipelines, data modeling, analytics, visualization, and machine learning (e.g. 
Kimball methodology) \n* Solid understanding of modern distributed data architectures, data pipelines, and API pub/sub services \n* Experience engineering for SLA-driven data operations with responsibility for uptime, delivery, consistency, scalability, and continuous improvement of data infrastructure \n* Ability to understand and translate business requirements into data/analytic solutions \n* Extensive experience with Agile development methodologies \n* Prior experience with at least one of: Snowflake, BigQuery, Synapse, Databricks, or Redshift \n* Highly proficient in both SQL and Python for data manipulation and assembly of Airflow DAGs (a minimal example appears at the end of this listing). \n* Experience with cloud administration and DevOps best practices on AWS and GCP and/or general cloud architecture best practices, with accountability for cloud cost management \n* Strong interpersonal, leadership, and communication skills, with the ability to relate technical solutions to business terminology and goals \n* Ability to work independently in a remote culture and across many time zones and outsourced partners, likely CT or ET \n\n\n\nR365 Team Member Benefits & Compensation\n* This position has a salary range of $94K-$130K. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices. \n* Comprehensive medical benefits, 100% paid for employee\n* 401k + matching\n* Equity Option Grant\n* Unlimited PTO + Company holidays\n* Wellness initiatives\n\n\n#BI-Remote\n\n\n$90,000 - $130,000 a year\n\nR365 is an Equal Opportunity Employer and we encourage all forward-thinkers who embrace change and possess a positive attitude to apply. \n\n
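To illustrate the Airflow proficiency called out above, here is a minimal, hypothetical Airflow 2.x DAG skeleton with an extract, transform, and load chain. The DAG id, schedule, and task bodies are placeholders rather than Restaurant365’s actual pipelines.

```python
# Minimal, hypothetical Airflow 2.x DAG: extract -> transform -> load.
# DAG id, schedule, and the task logic below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull the day's records from a source system (placeholder logic).
    print("extracting for", context["ds"])


def transform(**context):
    # Apply business rules / conform to the warehouse model (placeholder logic).
    print("transforming for", context["ds"])


def load(**context):
    # Write the result to the EDW staging schema (placeholder logic).
    print("loading for", context["ds"])


with DAG(
    dag_id="daily_sales_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare dependencies so tasks run in order.
    extract_task >> transform_task >> load_task
```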
#Salary and compensation\nNo salary data was published by the company, so we estimated the salary based on similar jobs related to Design, SaaS, InfoSec, Python, Accounting, DevOps, Cloud, API, and Engineer roles:\n\n
$60,000 — $110,000/year\n
\n\n#Location\nRemote
\nBasil Systems is a healthcare-focused start-up delivering a SaaS application that leverages product, market, and regulatory data to support all types of players in the medical device and pharmaceutical industries. Our goal is to help our customers deliver safe and innovative healthcare products to patients and consumers, while reducing the time to get to market. After 14 months of stealth development, we soft-launched in late December 2020 to a very positive market reception and customers.\n\nIn short, Basil is a funded, revenue-generating, and growth-oriented company – and we are actively seeking talented engineers to build and deliver an extensive roadmap of amazing features. We need a Sr. backend engineer excited by multifaceted data challenges, and with the ability to design, build, and deploy stable and scalable production software.\n\nSr. Backend Engineer\n\nWe are looking for a self-starter; you’ll spend your time building reliable backend services and solving complex data processing problems. We would like you to have:\n\n\n* 5+ years of professional experience as a backend or data engineer\n\n* Very strong skills and experience with modern Python (3.8+), MongoDB (particularly 4.2+), and MySQL or MariaDB\n\n* Significant experience with AWS services, operations, and architecture, especially with respect to data-heavy applications\n\n* Spent time building and managing ETL pipelines\n\n* Familiarity with recent versions of Elasticsearch\n\n* Strong DevOps experience, with a commitment to engineering best practices \n\n\n\n\nNot all of these are required, but ideally you have experience with:\n\n\n* TypeScript / ES (Node)\n\n* Golang\n\n* Docker & Kubernetes\n\n* Terraform\n\n* Solid understanding of modern security practices\n\n* CI/CD\n\n\n\n\nSome nice-to-haves\n\n\n* You have worked with, reconciled, and normalized disparate data sets (see the brief sketch at the end of this listing)\n\n* Natural language processing (NLP) experience\n\n* Strong data modeling skills, and an eye for what the data means in a business and product context\n\n* Interest in machine learning, and experience building ML-driven or algorithmic data products\n\n* Some exposure to product analytics data pipelines and a basic understanding of A/B testing\n\n\n\n\nFinally\n\nWe are a distributed team headquartered in Boston, with an office in Nashville, TN. However, our culture allows flexibility as to when, where, and how you work best – and we actively employ and support remote engineers.\n\nBenefits include:\n\n\n* Competitive salary\n\n* Health and vision\n\n* An attractive equity package \n\n\n\n\nBasil supports and encourages building a work environment that is diverse, inclusive, and safe for everyone -- and we welcome all applicants. \n\n
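As a small illustration of the data reconciliation mentioned in the nice-to-haves, here is a hypothetical sketch that compares record identifiers between a MySQL table (via SQLAlchemy) and a MongoDB collection (via PyMongo). The connection strings, database, table, and collection names are invented for the example and do not reflect Basil’s actual schema.

```python
# Minimal, hypothetical reconciliation check between MySQL and MongoDB.
# Connection strings, database, table, and collection names are placeholders.
import os

from pymongo import MongoClient
from sqlalchemy import create_engine, text


def mysql_device_ids(conn_str):
    """Return the set of device ids stored in the relational table."""
    engine = create_engine(conn_str)
    with engine.connect() as conn:
        rows = conn.execute(text("SELECT device_id FROM devices"))
        return {row[0] for row in rows}


def mongo_device_ids(uri):
    """Return the set of device ids stored in the document collection."""
    client = MongoClient(uri)
    docs = client["example_db"]["devices"].find({}, {"device_id": 1, "_id": 0})
    return {doc["device_id"] for doc in docs}


if __name__ == "__main__":
    in_sql = mysql_device_ids(os.environ["MYSQL_URL"])
    in_mongo = mongo_device_ids(os.environ["MONGO_URI"])
    print(f"{len(in_sql - in_mongo)} ids present in MySQL but missing from MongoDB")
    print(f"{len(in_mongo - in_sql)} ids present in MongoDB but missing from MySQL")
```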
#Salary and compensation\nNo salary data was published by the company, so we estimated the salary based on similar jobs related to Engineer, Backend, DevOps, Python, SaaS, and Medical roles:\n\n
$75,000 — $120,000/year\n
\nCrunch.i, part of YouGov PLC, is a market-defining company in the analytics SaaS marketplace. We’re a company on the rise. We’ve built a revolutionary platform that transforms our customers’ ability to drive insight from market research and survey data. We offer a complete survey data analysis platform that allows market researchers, analysts, and marketers to collaborate in a secure, cloud-based environment, using a simple, intuitive drag-and-drop interface to prepare, analyze, visualize and deliver survey data and analysis. Quite simply, Crunch provides the quickest and easiest way for anyone, from CMO to PhD, with zero training, to analyze survey data. Users create tables, charts, graphs and maps. They filter and slice-and-dice survey data directly in their browser.\n\nOur start-up culture is casual, respectful of each other’s varied backgrounds and lives, and high-energy because of our shared dedication to our product and our mission. We are loyal to each other and our company. We value work/life balance, efficiency, simplicity, and fantastic customer service! Crunch has no offices and fully embraces a 100% remote culture. We have 40 employees spread across 5 continents. Remote work at Crunch is flexible and largely independent, yet highly cooperative.\n\nWe are hiring a DevOps Lead to help expand our platform and operations excellence. We are inviting you to join our small, fully remote team of developers and operators helping make our platform faster, more secure, and more reliable. You will need to be self-motivated and disciplined in order to work with our fully distributed team.\n\nWe are looking for someone who is a quick study, who is eager to learn and grow with us, and who has experience in DevOps and Agile cultures. At Crunch, we believe in learning together: we recognize that we don’t have all the answers, and we try to ask each other the right questions. As Crunch employees are completely distributed, it’s crucial that you can work well independently, and keep yourself motivated and focused.\n\nOur Stack:\n\nWe currently run our in-house production Python code against Redis, MongoDB, and Elasticsearch services. We proxy API requests through NGINX, load balance with ELBs, and deploy our React web application to the AWS CloudFront CDN. Our current CI/CD process is built around GitHub, Jenkins, and Blue Ocean, including unit, integration, and end-to-end tests and automated system deployments. 
We deploy to auto-scaling groups using Ansible and cloud-init.\n\nIn the future, all or part of our platform may be deployed via Drone CI, Kubernetes, NGINX ingress, Helm, and Spinnaker.\n\nWhat you'll do:\n\nAs a Leader:\n\n\n* Manage and lead a team of Cloud Operations Engineers who are tasked with ensuring our uptime guarantees to our customer base.\n\n* Scale the worldwide Cloud Operations Engineering team with the strategic implementation of new processes and tools.\n\n* Hire and ramp exceptional Cloud Operations Engineers.\n\n* Assist in scoping, designing, and deploying systems that reduce Mean Time to Resolve for customer incidents.\n\n* Inform executive leadership and escalation management personnel of major outages.\n\n* Compile and report KPIs across the full company.\n\n* Work with Sales Engineers to complete pre-sales questionnaires and to gather customer use metrics.\n\n* Prioritize projects competing for human and computational resources to achieve organizational goals.\n\n\n\n\nAs an Engineer:\n\n\n* Monitor and detect emerging customer-facing incidents on the Crunch platform; assist in their proactive resolution, and work to prevent them from occurring.\n\n* Coordinate and participate in a weekly on-call rotation, where you will handle short-term customer incidents (from direct surveillance or through alerts via our Technical Services Engineers).\n\n* Diagnose live incidents, differentiate between platform issues and usage issues across the entire stack (hardware, software, application, and network) within physical datacenter and cloud-based environments, and take the first steps towards resolution.\n\n* Automate routine monitoring and troubleshooting tasks.\n\n* Cooperate with our product management and engineering organizations by identifying areas for improvement in the management of applications powering the Crunch infrastructure.\n\n* Provide consistent, high-quality feedback and recommendations to our product managers and development teams regarding product defects or recurring performance issues.\n\n* Be the owner of our platform. 
This includes everything from our cloud provider implementation to how we build, deploy, and instrument our systems.\n\n* Drive improvements and advancements to the platform in areas such as container orchestration, service mesh, and request/retry strategies.\n\n* Build frameworks and tools to empower safe, developer-led changes, automate the manual steps, and provide insight into our complex system.\n\n* Work directly with software engineering and infrastructure leadership to enhance the performance, scalability, and observability of multiple applications, ensure that production hand-off requirements are met, and escalate issues.\n\n* Embed into SRE projects to stay close to the operational workflows and issues.\n\n* Evangelize the adoption of best practices in relation to performance and reliability across the organization.\n\n* Provide a solid operational foundation for building and maintaining successful SRE teams and processes.\n\n* Maintain project and operational workload statistics.\n\n* Promote a healthy and functional work environment.\n\n* Work with Security experts to do periodic penetration testing, and drive resolution for any issues discovered.\n\n* Liaise with IT and Security Team Leads to successfully complete cross-team projects, filling in for these Leads when necessary.\n\n* Administer a large portfolio of SaaS tools used throughout the company.\n\n\n\n\nQualifications:\n\n\n* Team Lead experience with an on-call DevOps, SRE, or Cloud Operations team (at least 2 years).\n\n* Experience recruiting, mentoring, and promoting high-performing team members.\n\n* Experience being an on-call DevOps, SRE, or Cloud Operations engineer (at least 2 years).\n\n* Proven track record of designing, building, sizing, optimizing, and maintaining cloud infrastructure.\n\n* Proven experience developing software, CI/CD pipelines, and automation, and managing production infrastructure in AWS.\n\n* Proven track record of designing, implementing, and maintaining full CI/CD pipelines in a cloud environment (Jenkins experience preferred).\n\n* Experience with containers and container orchestration tools (Docker, Kubernetes, Helm, Traefik, NGINX ingress, and Spinnaker experience preferred).\n\n* Expertise with Linux system administration (5 years) and networking technologies, including IPv6.\n\n* Knowledgeable about a wide range of web and internet technologies.\n\n* Knowledge of NoSQL database operations and concepts.\n\n* Experience in monitoring, system performance data collection and analysis, and reporting.\n\n* Capability to write small programs/scripts to solve both short-term systems problems and to automate repetitive workflows (Python and Bash preferred; see the brief sketch at the end of this listing).\n\n* Exceptional English communication and troubleshooting skills.\n\n* A keen interest in learning new things.\n\n\n \n\n
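As a small illustration of the scripted automation listed in the qualifications, here is a hypothetical Python health check that probes a few HTTP endpoints and exits non-zero so a cron or alerting wrapper can page the on-call engineer. The endpoint URLs are placeholders, not Crunch’s actual services.

```python
# Minimal, hypothetical health-check script: probe HTTP endpoints and exit
# non-zero if any are unhealthy. Endpoint URLs are placeholders.
import sys

import requests

ENDPOINTS = {
    "api": "https://api.example.com/healthz",
    "web": "https://app.example.com/",
}


def check(name, url):
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"{name}: request failed ({exc})")
        return False
    print(f"{name}: HTTP {resp.status_code}")
    return resp.status_code == 200


if __name__ == "__main__":
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    sys.exit(0 if all(results) else 1)
```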
#Salary and compensation\nNo salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, Executive, React, English, Elasticsearch, Cloud, NoSQL, Python, API, Sales, SaaS, Engineer, Nginx, and Linux roles:\n\n
$70,000 — $120,000/year\n
\nSummary:\n\nWe are looking for a Senior DevOps (Site Reliability) Engineer to join Selerity’s team, scaling up an A.I.-driven analytics and recommendation platform and integrating it into enterprise workflows. Highly competitive compensation plus significant opportunities for professional growth and career advancement.\n\nEmployment Type: Contract or Full-time\n\nLocation is flexible: We have offices in New York City and Oak Park, Illinois (a Chicago suburb), but about half of our team currently works remotely from various parts of Europe, North America, and Asia. \n\n\nJob Description:\n\nWant to change how the world engages with chat, research, social media, news, and data?\n\nSelerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We're tackling big challenges in predictive analytics, conversational interfaces, and workflow automation and need your help!\n\nWe’re looking for an experienced DevOps (Site Reliability) Engineer to join a major initiative at a critical point in our company’s growth. The majority of Selerity’s applications are developed in Java and C++ on Linux, but knowledge of other languages (especially Python and JavaScript), platforms, and levels of the stack is very helpful.\n\n\n\nMust-haves:\n\n * Possess a rock-solid background in Computer Science (minimum BS in Comp Sci or related field) + at least 5 years (ideally 10+) of challenging work experience.\n\n * Implementation of DevOps / SRE processes at scale, including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite); a brief illustrative sketch follows this list.\n\n * Demonstrated proficiency building and modifying Java and C++ applications in Linux environments (using Git, SVN). \n\n * Significant operations expertise with Ansible (preferred), Chef, or Puppet deployment automation in a cloud environment.\n\n * Direct experience in the design, implementation, and maintenance of SaaS APIs that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).\n\n * Solid track record of making effective design decisions that balance near-term and long-term objectives.\n\n * Know when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.\n\n * Work effectively in agile teams with remote members; get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.\n\n * Clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc. 
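As a brief illustration of the platform-monitoring item in the must-haves, here is a hypothetical sketch that pushes a custom metric to Graphite over its plaintext protocol (one "path value timestamp" line per metric, typically sent to Carbon on TCP port 2003), where it can then be graphed in Grafana. The Carbon host and metric path are placeholders.

```python
# Minimal, hypothetical sketch: send a custom metric to Graphite's Carbon
# receiver over the plaintext protocol. Host and metric path are placeholders.
import socket
import time


def send_metric(path, value, host="carbon.example.com", port=2003):
    """Emit one 'path value timestamp' line to Carbon over TCP."""
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))


if __name__ == "__main__":
    # e.g. record how long a deployment or batch job took, for a Grafana panel
    send_metric("example.deploy.duration_seconds", 42.0)
```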
\n\n\n\nNice-to-haves:\n\n * Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as OSI layer 1 and 2 protocols.\n\n * Experience with Internet routing protocols such as BGP.\n\n * Implementation of software-defined networking or other non-traditional networking paradigms.\n\n * Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.\n\n * Full-stack development and operations experience with web apps on Node.js.\n\n * Experience with analytics visualization libraries.\n\n * Experience with large-scale analytics and machine learning technologies including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc.\n\n * Conversant with relational, column, object, and graph database fundamentals and strong practical experience in any of those paradigms.\n\n * Deep understanding of how to build software agents and conversational workflows.\n\n * Experience with additional modern programming languages (Python, Scala, …)\n\n\n\nOur stack:\n\n * Java, C++, Python, JavaScript/ECMAScript + Node, Angular, RequireJS, Electron, Scala, etc.\n\n * A variety of open-source and in-house frameworks for natural language processing and machine learning, including artificial neural networks / deep learning.\n\n * Hybrid of AWS (EC2, S3, RDS, R53) + dedicated datacenter network, server, and GPU/coprocessor infrastructure.\n\n * Cassandra and Aurora, plus an in-house streaming analytics pipeline (similar to Apache Flink) and indexing/query engine (similar to Elasticsearch).\n\n * In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (similarities to protobufs/FixFast/zeromq/itch).\n\n * Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, IntelliJ.\n\nThis position offers a great opportunity to work with advanced technologies, collaborate with a top-notch, global team, and disrupt a highly visible, multi-billion-dollar market. \n\n\n\nCompensation:\n\nWe understand how to attract and retain the best talent and offer a competitive mix of salary, benefits, and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.\n\nWe have team members on four continents and we're adept at making remote workers feel like part of the team. If you join our NYC main office, be sure to bring your Nerf toys, your drones, and your maker gear - we’re into that stuff, too.\n\n\nInterview Process:\n\nIf you can see yourself at Selerity, send your resume and/or online profile (e.g. LinkedIn) to [email protected]. We’ll arrange a short introductory phone call, and if it sounds like there’s a match we'll arrange for you to meet the team for a full interview. \n\nThe interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. It is intended to be challenging - but the developers you meet and the topics you’ll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity. \n\nWe value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence. 
You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity or national origin.\n\n\n\nRecruiters: Please note that we are not currently accepting referrals from recruiters for this position. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, Senior, Crypto, Finance, Java, Cloud, Python, SaaS, Engineer, Apache, and Linux roles:\n\n
$70,000 — $120,000/year\n