Who we are:

Raft (https://TeamRaft.com) is a customer-obsessed, non-traditional small business with a purposeful focus on Distributed Data Systems, Platforms at Scale, and Complex Application Development, headquartered in McLean, VA. Our clients include innovative federal and public agencies leveraging design thinking, a cutting-edge tech stack, and a cloud-native ecosystem. We build digital solutions that impact the lives of millions of Americans.

We're looking for an experienced DevSecOps Engineer to support our customers and join our passionate team of high-impact problem solvers.

About the role:

As a DevSecOps Engineer, you are responsible for actively developing and implementing end-to-end cluster and application lifecycles, ensuring processes are both secure and efficient. You will collaborate with clients to design and implement Kubernetes or Docker solutions, take part in building CI/CD pipelines to ensure a smooth software update process, and actively apply GitOps principles to software delivery.

What we are looking for:

3+ years of Python software development experience

3+ years of Bash scripting experience

3+ years of Linux system administration experience

3+ years of hands-on experience with Kubernetes or Docker, provisioning production clusters and maintaining their compliance

3+ years of automated DevOps (e.g., building pipelines) or cloud infrastructure (e.g., AWS, Azure) experience

3+ years of automating builds for Docker or OCI containers

Skilled in building GitLab CI/CD pipeline templates and jobs

Ability to implement and improve development and security best practices by building the necessary CI/CD pipeline jobs (linting, SCA, SAST, vulnerability scanning)

Familiarity with Cosign for signing and verifying container images and attestations

Experienced in developing, troubleshooting, and maintaining build automation for applications and images, and in developing end-to-end application testing

Proven background in software systems development via CI/CD pipelines (GitLab Pipelines)

Exposure to Agile, DevOps, and DevSecOps methodologies, practices, and culture

Strong knowledge of version control systems like GitLab, with the ability to maintain and operate GitLab in the cloud or on-premises

Problem-solving aptitude

Expertise in designing and implementing enterprise-grade, scalable cloud-based services to support our development teams

Familiarity with GitOps tools, e.g., FluxCD, ArgoCD

Knowledge of log management and analytics tools such as PLG, Splunk, or ELK

Background in building, deployment, release automation, or orchestration

Proficiency in writing unit/integration/e2e tests (e.g., JUnit, Cypress, Selenium)

Skilled in Infrastructure as Code (e.g., Terraform, Ansible)

Must obtain Security+ within the first 90 days of employment with Raft

Highly preferred:

2+ years of experience with Spring Boot (Java development), in particular the Spring Cloud Gateway library

Proficiency in building CLI applications in Python

Skilled in working with file scanning applications (e.g., virus scanning tools)

Experience using FastAPI to develop web applications in Python (a minimal sketch follows this list)

Expertise in implementing Sigstore and Cosign to sign container images as well as SBOMs

Skilled in hardening application containers

Proven background with Istio service mesh

Background in defensive or offensive cyber capability development

Passionate about automation, system efficiency, and security
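To make the FastAPI item above concrete, here is a minimal sketch of the kind of Python web service the posting refers to. It is illustrative only: the route, model, and field names are hypothetical and not part of Raft's stack.

```python
# Minimal FastAPI sketch (route/model/field names are hypothetical).
# Run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scan-report-api")

class ScanResult(BaseModel):
    image: str          # container image reference
    passed: bool        # whether the vulnerability scan passed
    findings: int = 0   # number of findings reported by the scanner

@app.post("/results")
def submit_result(result: ScanResult):
    # A real service would persist this; here we just echo it back.
    return {"received": result.image, "passed": result.passed}

@app.get("/healthz")
def health():
    return {"status": "ok"}
```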
Clearance Requirements:

Active Top Secret security clearance with SCI eligibility

Work Type:

Remote (local to San Antonio, TX)

May require up to 10% travel

Salary Range:

$90,000 - $170,000

Compensation is determined by a candidate's overall experience, demonstrated skill, and proven abilities

What we will offer you:

Highly competitive salary

Fully covered healthcare, dental, and vision coverage

401(k) and company match

Take-as-you-need PTO + 11 paid holidays

Education and training benefits

Annual budget for your tech/gadget needs

Monthly box of yummy snacks to eat while doing meaningful work

Remote, hybrid, and flexible work options

Team off-sites in fun places!

Generous referral bonuses

And more!

Our Vision Statement:

We bridge the gap between humans and data through radical transparency and our obsession with the mission.

Our Customer Obsession:

We will approach every deliverable like it's a product, and we will adopt a customer-obsessed mentality. As we grow and our footprint becomes larger, teams and employees will treat each other not only as teammates but as customers. We must live the customer-obsessed mindset, always. This will help us scale, and it will translate to the interactions our Rafters have with their clients and the other product teams they integrate with. Our culture will enable our success and set us apart from other companies.

How do we get there?

Public-sector modernization is critical for us to live in a better world. We at Raft want to innovate and solve complex problems. And if we are successful, our generation and the ones that follow will live in a delightful, efficient, and accessible world where out-of-the-box thinking and collaboration are the norm.

Raft's core philosophy is Ubuntu: I Am, Because We Are. We support our 'nadi' by elevating the other Rafters. We work as a hyper-collaborative team where each team member brings a unique perspective, adding value that did not exist before. People make Raft special. We celebrate each other and our cognitive and cultural diversity. We are devoted to our practice of innovation and collaboration.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Python, Docker, DevOps, Cloud, and Engineer jobs:

$60,000 — $100,000/year
#Benefits

401(k)
Distributed team
Async
Vision insurance
Dental insurance
Medical insurance
Unlimited vacation
Paid time off
4 day workweek
401k matching
Company retreats
Coworking budget
Learning budget
Free gym membership
Mental wellness budget
Home office budget
Pay in crypto
Pseudonymous
Profit sharing
Equity compensation
No whiteboard interview
No monitoring system
No politics at work
We hire old (and young)
#Location
San Antonio, Texas, United States
By making evidence the heart of security, we help customers stay ahead of ever-changing cyber-attacks.

Corelight is the cybersecurity company that transforms network and cloud activity into evidence: evidence that elite defenders use to proactively hunt for threats, accelerate response to cyber incidents, gain complete network visibility, and create powerful analytics using machine-learning and behavioral analysis tools. Easily deployed, and available in traditional and SaaS-based formats, Corelight is the fastest-growing Network Detection and Response (NDR) platform in the industry. And we are the only NDR platform that leverages the power of open-source projects, in addition to our own technology, to deliver intrusion detection (IDS), network security monitoring (NSM), and Smart PCAP solutions. We sell to some of the most sensitive, mission-critical large enterprises and government agencies in the world.

This position performs functions essential to the health and growth of our product partners through monitoring automation, reducing toil, and building resilience into our infrastructure and APIs. This role is an excellent opportunity for someone passionate about and committed to designing, building, and maintaining high-performance Linux and cloud-based systems and communications infrastructure.

The core pillars of cloud operations are:

Maintain and build external and internal cloud services, achieving agreed-upon SLIs, SLOs, and SLAs

Assist in root administration of complex cloud environments (primarily AWS)

Evangelize and implement best practices like automation, continuous integration and deployment (CI/CD), monitoring, and testing

Encourage automated secret management

Build and maintain systems that are fault-tolerant and resilient

Your Role and Responsibilities

Participate in the design, development, testing, and maintenance of cloud services

Maintain knowledge of, and assist leadership with, account and network administration best practices across all environments

Provide advice and assistance on cloud architecture and APIs

Implement automation, disaster recovery, and system resilience best practices

Implement improvements in architecture and design; facilitate and perform various tests and reviews of our code, products, services, and infrastructure

Minimum Qualifications

3+ years of software engineering experience in Go or a similarly statically typed language

3+ years in operations engineering

Experience in application and database engineering for scale

Experience in application support practices and procedures for critical platforms

Experience in application monitoring and profiling

Practical experience with Infrastructure as Code tools such as Terraform and Ansible

Familiarity with AWS, particularly Lambda, API Gateway, S3, VPC, Route 53, IAM, and CloudFront; familiarity with the AWS SDKs and the AWS CLI (a short SDK sketch follows the lists below)

Understanding of networking and NetSec best practices

Effective communication skills, team spirit, problem-solving, positive attitude

Bachelor's or Master's degree in Computer Science or related fields, or equivalent experience

Preferred Skills

Experience in scripting languages like Python and JavaScript

Working knowledge of GCP and Azure

Familiarity with containerized architectures using Docker and Kubernetes

Familiarity with machine learning infrastructure
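As a flavor of the AWS SDK familiarity the qualifications mention, here is a minimal, hedged sketch using boto3, the AWS SDK for Python. The bucket and key names are hypothetical, and the snippet assumes AWS credentials are available through the usual environment or config chain.

```python
# Minimal boto3 sketch (bucket/key names are hypothetical).
# Assumes AWS credentials are resolvable via the standard credential chain.
import boto3

s3 = boto3.client("s3")

# List the first few objects under a prefix.
resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="reports/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Generate a time-limited presigned download URL.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "reports/latest.json"},
    ExpiresIn=3600,  # seconds
)
print(url)
```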
We are proud of our culture and values: driving diversity of background and thought, low-ego results, applied curiosity, and tireless service to our customers and community. Corelight is committed to a geographically dispersed yet connected employee base, with employees working from home and office locations around the world. Fueled by an accelerating revenue stream and investments from top-tier venture capital organizations such as CrowdStrike, Accel, and Insight, we are rapidly expanding our team.

Check us out at www.corelight.com

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, Docker, Cloud, Engineer, and Linux jobs:

$60,000 — $110,000/year
#Location
San Francisco, California, United States
We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

* Architect and develop data pipelines to optimize performance, quality, and scalability

* Build, maintain, and operate the scalable, performant, containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources

* Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake (a minimal Airflow sketch follows these lists)

* Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance

* Orchestrate sophisticated data flow patterns across a variety of disparate tooling

* Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics

* Partner with the rest of the Data Platform team to set best practices and ensure their execution

* Partner with the analytics engineers to ensure the performance and reliability of our data sources

* Partner with machine learning engineers to deploy predictive models

* Partner with the legal and security teams to build frameworks and implement data compliance and security policies

* Partner with DevOps to build IaC and CI/CD pipelines

* Support code versioning and code deployments for data pipelines

You Have:

* 8+ years of professional experience designing, creating, and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages

* Demonstrated experience writing clean, efficient, and well-documented Python code, and a willingness to become effective in other languages as needed

* Demonstrated experience writing complex, highly optimized SQL queries across large data sets

* Experience with cloud technologies such as AWS and/or Google Cloud Platform

* Experience with the Databricks platform

* Experience with IaC technologies like Terraform

* Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres

* Experience building event streaming pipelines using Kafka/Confluent Kafka

* Experience with a modern data stack like Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, and Tableau/Looker

* Experience with containers and container orchestration tools such as Docker or Kubernetes

* Experience with machine learning and MLOps

* Experience with CI/CD (Jenkins, GitHub Actions, CircleCI)

* Thorough understanding of the SDLC and Agile frameworks

* Project management skills and a demonstrated ability to work autonomously

Nice to Have:

* Experience building data models using dbt

* Experience with JavaScript and event tracking tools like GTM

* Experience designing and developing systems with desired SLAs and data quality metrics

* Experience with microservice architecture

* Experience architecting an enterprise-grade data platform
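For readers unfamiliar with Airflow, here is a minimal, hedged sketch of the extract-then-load dependency pattern the posting describes. The DAG, task, and callable names are hypothetical, not Hims's actual pipelines.

```python
# Minimal Airflow DAG sketch (dag/task names are hypothetical).
# Illustrates the extract -> load dependency pattern described above.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    # A real pipeline would pull from an external API or a Kafka topic.
    print("extracting orders ...")

def load_to_lake(**context):
    # ... and this would write the transformed batch to the data lake.
    print("loading to lake ...")

with DAG(
    dag_id="orders_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_lake", python_callable=load_to_lake)
    extract >> load  # load runs only after extract succeeds
```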
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar Python, Docker, Testing, DevOps, JavaScript, Cloud, API, Senior, Legal, and Engineer jobs:

$60,000 — $110,000/year
#Location
San Francisco, California, United States
If you want to advance your career in a progressive, fast-paced (fully remote) setting, alongside a passionate team dedicated to tackling intricate challenges with cutting-edge technologies, then we'd love to hear from you!

We are a technology-oriented consulting company specializing in developing near-real-time, highly available analytics platforms in the cloud. We have clients across a variety of industries, including healthcare, fintech, motor vehicle manufacturing, and major media.

We have established a robust professional bond with our offshore staff and want to grow our engineering team in South America. At present, we are actively seeking engineers located in South America.

Recruitment Preferences:

We appreciate the interest of recruitment agencies in our job openings, but at this time we are not accepting submissions from third-party recruitment agencies or consultants. We kindly request that agencies refrain from contacting us regarding this position.

Benefits you will receive if hired:
* Professional development (certifications, career growth, training)
* Full-time, 40 hours per week; EST hours
* Competitive pay (see below)

Roles and Responsibilities:
* As a key member of ThorTech's team, you will play a vital role in supporting multiple mission-critical business applications in healthcare, fintech, media, and other industries
* You will wear multiple hats across engineering disciplines, including DevOps, SRE, and some application and development support, with a heavy emphasis on AWS as the underlying platform
* The ideal candidate is a software engineer with a strong passion for infrastructure automation, cyber security, and containers who is excited about learning new technology. You have experience collaborating with remote team members and are eager to consistently sharpen your technical skills

Qualifications:
* 4+ years of professional experience with infrastructure automation, containers, cybersecurity, and software development
* 2+ years of experience programming in Python (or another mainstream programming language)
* Ability to identify and remediate technical problems on the fly
* Excellent written and verbal communication skills
* Self-driven attitude with a passion for solving challenging problems

General Understanding and Experience with:
* Linux and Windows management
* Infrastructure as Code tooling (e.g., CloudFormation, Terraform, Pulumi, CDK)
* SDLC automation such as GitHub Actions, Jenkins, CircleCI, etc.
* Creating and maintaining CI/CD pipelines
* Cloud platforms (e.g., AWS, GCP)
* Network programming; familiarity with HTTP, TCP/IP, and security groups
* Observability tooling (e.g., Datadog, Splunk, Grafana; a small monitoring-style sketch follows this list)
* Building and maintaining Docker images
* Container management platforms (e.g., Kubernetes, ECS)
* Scripting and automation tooling (e.g., bash, PowerShell)
* Cloud architecture and its best practices
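As a flavor of the scripting and observability work the list above describes, here is a minimal, hedged health-check sketch in Python. The endpoint URLs are hypothetical, not ThorTech systems, and it uses only the standard library so it could run anywhere a cron job can.

```python
# Minimal service health-check sketch (URLs are hypothetical).
import json
import urllib.error
import urllib.request

ENDPOINTS = {
    "api": "https://example.internal/healthz",
    "web": "https://example.internal/status",
}

def check(url: str) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

if __name__ == "__main__":
    results = {name: check(url) for name, url in ENDPOINTS.items()}
    print(json.dumps(results, indent=2))
    # A real script would page on-call or emit metrics instead of printing.
```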
$40 - $50 an hour

ThorTech Solutions is a New York-based software engineering and cloud consulting firm that has served clients for over 20 years. Our founders are engineers who thrive on tackling seemingly impossible problems. Our company culture reflects this passion for innovation, creativity, and constantly pushing ourselves to overcome challenges.

ThorTech Solutions is an Equal Opportunity/Affirmative Action Employer: Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, Consulting, Docker, DevOps, Cloud, and Engineer jobs:

$60,000 — $110,000/year
#Location
South America
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://www.twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network" (DDN), is a single SQL endpoint where users can query any data on Splitgraph (a connection sketch follows this list). Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks.
Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS services for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.
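Since the DDN described above speaks the Postgres wire protocol, any standard Postgres client should be able to query it. Here is a minimal, hedged sketch in Python using psycopg2; the endpoint host, credentials, and table name are placeholders for illustration, so check the Splitgraph docs for the real connection details.

```python
# Hypothetical sketch: querying a Splitgraph-style SQL endpoint with psycopg2.
# Host, credentials, and table name below are placeholders, not real values.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.example",  # placeholder endpoint
    port=5432,
    user="API_KEY",                  # placeholder credentials
    password="API_SECRET",
    dbname="ddn",
)
with conn, conn.cursor() as cur:
    # One SQL endpoint; the DDN routes the query to the right dataset.
    cur.execute('SELECT * FROM "namespace/repository".some_table LIMIT 5;')
    for row in cur.fetchall():
        print(row)
conn.close()
```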
We are very competitive.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data.

# Benefits

- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
Worldwide
Stacktical is a Predictive Scalability Testing platform. It ensures our customers design and ship software that always scales to the maximum of its ability and with minimum footprint.

The Stacktical Site Reliability Engineer is responsible for helping our customers engineer CI/CD pipelines around system testing practices that involve Stacktical. Like the rest of the team, they also actively participate in building the Stacktical platform itself.

We are looking for a skilled DevOps and Site Reliability Engineer, an expert in scalability, who is excited about the vision of using predictive analytics and AI to reinvent the field.

With a long-standing passion for automating your work and the work of others, you also understand how Software as a Service is increasingly empowering companies to do just that.

You can point to previous experience in startups, and you're capable of working remotely, with great efficiency, in fast-paced, demanding environments. Ideally, you'd have a proven track record of working remotely for 2+ years.

Needless to say, you fully embrace the working philosophy of digital nomadism we're developing at Stacktical, with both the benefits and responsibilities that come with it.

Your role and responsibilities include the following:

- Architecture, implementation and maintenance of server clusters, APIs and microservices, including critical production environments, in cloud and other hosting configurations (dedicated, VPS and shared).

- Ensuring the availability, performance and scalability of applications, in line with proven design and architecture best practices.

- Designing and executing scalability strategies that ensure the scalability and elasticity of the infrastructure.

- Managing a portfolio of software and its development life cycle, and optimizing its continuous integration and delivery workflows (CI/CD).

- Automating the quality and reliability testing of applications (unit tests, integration tests, E2E tests, performance and scalability tests).

## Skills we are looking for

- A 50-50 mix of software development and system administration experience

- Proficiency in Node.js, Python, R, Erlang (Elixir) and/or Go

- Hands-on experience in NoSQL/SQL database optimization (slow query indexing, sharding, clustering)

- Hands-on experience in administering high-availability and high-performance environments, as well as managing large-scale deployments of traffic-heavy applications

- Extensive knowledge of cloud computing concepts, technologies and providers (Amazon AWS, Google Cloud Platform, Microsoft Azure...)

- A strong ability to design and execute cutting-edge system testing strategies (smoke tests, performance/load tests, regression tests, capacity tests); a minimal load-test sketch follows this list

- Excellent understanding of scalability processes and techniques

- Good grasp of scalability and elasticity concepts and creative auto-scaling strategies (Auto Scaling Groups management, API-based scheduling)

- Hands-on experience with Docker and Docker orchestration tools like Kubernetes, and their corresponding provider management services (Amazon ECS, Google Container Engine, Azure Container Service...)

- Hands-on experience with leading Infrastructure as Code tools like Terraform and Ansible

- Proven ability to work remotely with teams of various sizes in the same or different timezones, from anywhere, while remaining highly motivated, productive, and organized

- Excellent English communication skills, including verbal, written, and presentation; great email and instant messaging (Slack) proficiency
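To make the load-testing bullet concrete, here is a minimal sketch using Locust, one common Python load-testing tool. It is illustrative only: the target host and paths are hypothetical, and this is not Stacktical's own tooling.

```python
# Minimal Locust load-test sketch (target paths are hypothetical).
# Run with: locust -f loadtest.py --host https://staging.example.com
from locust import HttpUser, task, between

class BrowsingUser(HttpUser):
    # Each simulated user pauses 1-5 seconds between requests.
    wait_time = between(1, 5)

    @task(3)  # weighted: browsing happens 3x as often as checkout
    def browse(self):
        self.client.get("/api/products")

    @task(1)
    def checkout(self):
        self.client.post("/api/checkout", json={"cart_id": "demo"})
```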
We're looking for a self-learner always willing to step out of her/his comfort zone to become better; an upright individual, ready to write the first and many chapters of the Stacktical story with us.

## Life at our virtual office

Our headquarters are in Paris, but our offices and our clients are everywhere in the world. We're a fully distributed company with a 100% remote workforce, so pretty much everything happens on Slack and various other collaborative tools.

## Remote work at Stacktical

Remote work at Stacktical requires you to enter a contract with the Stacktical company using your own billing structure. That means you will either need to own a company or leverage a compatible legal status. Labour laws can differ greatly from one country to another, and we are not (yet) in a position to comply with the local requirements of all our employees. Just because you will be a contractor doesn't make you any less a fully-fledged employee of Stacktical. In fact, even our founders are contractors too.

## Compensation Package

#### Fixed-price contract

Your contract's fixed price is engineered around your expectations, our possibilities and the overall implications of remote work. Let's have a transparent chat about it.

#### Stock Options

Yes, joining Stacktical means you are entrusted to own part of the company.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar DevOps, JavaScript, Cloud, Erlang, Python, Node, API, Admin, Engineer, Apache, Nginx, Sys Admin, Docker, English, NoSQL, Microsoft, and Legal jobs:

$70,000 — $120,000/year