Remote Principal Data Architect Customer Data Platform
We have an opening for a Principal Data Engineer/Architect to collaborate with our Data Platform and Segmentation teams on solutions that improve our ability to process and leverage data and increase our data integration velocity. As a Data Engineer/Architect, you will partner with various teams to develop business requirements, standardize business inputs, and shape a long-term vision. You'll play a pivotal role working with the Data Engineers who build and maintain the data pipelines and data lake that enable and accelerate Data Science, Machine Learning, and AI, as well as the engine for real-time segmentation and marketing automation within Constant Contact.

What you'll do:

* Work closely with Data Science/ML/AI teams to leverage, and provide access to, the vast data available for data-driven marketing insights for our customers
* Work with cross-functional teams, define data strategies, and leverage the latest technologies in data processing and data analytics
* Design, implement, and build data models and pipelines that deliver data with measurable quality under the service level agreement
* Design, develop, and deliver improvements to Constant Contact's data integration practices, data analytics, and real-time stream processing
* Work with the teams and stakeholders to scope and prioritize solutions
* Establish rigorous engineering processes that ensure service quality, deliver new capabilities, and continuously improve metrics

Who you are:

* 8+ years of experience in business analytics, data science, software development, data modeling, and/or data engineering
* 3+ years of experience creating high-quality data pipelines
* Proficiency in Java, Python, and SQL
* Strong understanding of OLAP concepts, with experience in OLAP technologies such as ClickHouse, Druid, Pinot, or similar platforms
* Familiarity with search technologies such as Elasticsearch for high-performance real-time applications
* Experience orchestrating data pipelines with technologies such as Airflow, Dagster, and/or NiFi (see the sketch after this list)
* Familiarity with stream-processing frameworks such as Apache Flink
* Experience with AWS cloud services including, but not limited to, Kinesis, Glue, S3, Lambda, API Gateway, DynamoDB, and Athena
* Experience with Docker and Kubernetes
* Certification in AWS Cloud (AWS Certified Solutions Architect or similar)
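The pipeline-orchestration bullet above lends itself to a quick illustration. Here is a minimal sketch of an Airflow 2.x DAG wiring an extract task into a load task; the DAG id, task names, and callables are placeholders invented for this example, not anything from the posting.

```python
# Minimal Airflow 2.x DAG sketch. The dag_id, task names, and callables
# are illustrative placeholders, not taken from the job post.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events():
    print("pull raw events from the source system")


def load_warehouse():
    print("write curated rows to the warehouse")


with DAG(
    dag_id="example_segmentation_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)

    extract >> load  # extract must succeed before load runs
```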
#LI-HK1 #LI-Remote

#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Docker, Cloud, API, and Marketing jobs:
$37,500 — $77,500/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Waltham, Massachusetts, United States
ABOUT THRIVE MARKET

Thrive Market was founded in 2014 with a mission to make healthy and sustainable living easy and affordable for everyone. As an online, membership-based market, we deliver the highest quality healthy and sustainable products at member-only prices. Every day, we help our 1.4M+ Members find better products, support better brands, and build a better world in the process. We are a profitable, half-billion-dollar-revenue business proving that mission-focused companies can succeed. We are also a Certified B Corporation, recently became a Public Benefit Corporation, and are a Climate Neutral Certified company. Join us as we bring healthy and sustainable living to millions of Americans in the years to come.

THE ROLE

At Thrive Market, our Platform team is in a constant state of innovation, crafting highly performant and scalable services that empower our product teams to create exceptional customer experiences. Here, you will find the autonomy to drive critical initiatives that not only advance our mission but also propel our platform to new heights of excellence. We are seeking an experienced, hands-on DevOps Manager to lead our DevOps team. This person will drive our cloud infrastructure and the continuous delivery of our applications and services. The ideal candidate will have a strong background in software development, cloud infrastructure, and service operations, with a passion for empowering developers, streamlining processes, and automating away toil.

If you have read The Phoenix Project, Accelerate, The DevOps Handbook, and/or the Google SRE book, you will fit right in.

RESPONSIBILITIES:
* Team Leadership:
  * Manage, mentor, and grow a team of DevOps engineers.
  * Foster a collaborative, high-performance culture of continuous improvement within the team.
  * Level up the team through cross-training and strategic task assignments, continually challenging them to raise the bar.
  * Conduct regular performance reviews and provide feedback.
  * Work closely with development, QA, and IT teams to ensure smooth and reliable operation of software and systems.
  * Facilitate communication between teams and stakeholders.
* Continuous Delivery:
  * Implement and manage continuous integration/continuous deployment (CI/CD) pipelines for the company.
  * Identify and remove bottlenecks in the software delivery process.
  * Promote best practices for software deployment.
* Infrastructure Management:
  * Architect, improve, and administer AWS cloud infrastructure and services.
  * Leverage AWS-managed services wherever possible.
  * Create automated orchestration and deployment solutions using Puppet, Terraform, Docker, and Kubernetes.
  * Write and support scripts and automation using Python, Ruby, Bash, JavaScript, and Java.
  * Manage and optimize large-scale, cloud-hosted MySQL, Postgres, Redis, and Elasticsearch databases.
  * Lead efforts for disaster recovery, capacity expansion, and system upgrades.
  * Oversee the provisioning, configuration, and monitoring of infrastructure.
  * Ensure system reliability, availability, and performance.
* Automation:
  * Encourage DevOps engineers and application developers to streamline processes and automate their work whenever possible.
  * Develop and maintain automation for infrastructure provisioning, configuration management, and deployment.
  * Advocate for and implement automation in all aspects of the software lifecycle (see the sketch after this list).
* Security and Compliance:
  * Conduct security audits, vulnerability assessments, and system hardening initiatives, including maintaining PCI and SOX compliance.
  * Ensure that systems and processes adhere to industry best practices for security and compliance.
* Monitoring and Incident Management:
  * Implement and manage monitoring tools to ensure system health and performance.
  * Lead incident response efforts and post-incident reviews to learn from failures and mitigate and prevent future occurrences.
* Project Management:
  * Manage JIRA ticket creation, grooming, ticket/epic management, and documentation, keeping it all up to date.
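The automation responsibilities above are the kind of toil that short scripts retire. A minimal, hedged boto3 sketch in that spirit, which tags EC2 instances missing an "owner" tag; the tag key and default value are assumptions made for illustration:

```python
# Hedged sketch: find EC2 instances missing an "owner" tag and tag them so
# they surface in cost reports. The tag key/value are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2")

untagged = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if "owner" not in tags:
                untagged.append(instance["InstanceId"])

if untagged:
    ec2.create_tags(
        Resources=untagged,
        Tags=[{"Key": "owner", "Value": "unassigned"}],
    )
    print(f"tagged {len(untagged)} instances")
```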
QUALIFICATIONS:
* Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
* 8+ years of experience in DevOps, SRE, system administration, or software development.
* 2+ years of experience in a leadership or managerial role.
* Experience in both a startup environment and larger, scaled organizations is a plus!
* Extensive experience building and maintaining complex cloud infrastructure on AWS. AWS certification (AWS Certified Solutions Architect or similar) is a plus.
* Extensive experience in a DevOps/SRE/Systems Engineering role, with strong experience developing applications in one of the following: Python, Ruby, Groovy, Go.
* Strong experience managing Linux-based infrastructure, preferably Debian/Ubuntu.
* Deep knowledge of and experience with Docker and Kubernetes.
* Experience using and maintaining IaC/config management systems and deployment tools, including Terraform, Puppet, and Ansible.
* Experience troubleshooting production problems and leading multiple teams to resolve large-scale production issues.
* Proficiency with continuous integration and deployments using Jenkins, Concourse, and GitLab CI; knowledge of A/B, in-place, rolling, and phased deployment methodologies.
* Understanding of monitoring and systems tools like Logstash, Grafana, Prometheus, New Relic, etc.
* Good understanding of networking fundamentals and protocols.
* Basic understanding of storage architectures and design.
* Good critical thinking and problem-solving skills.
* Sense of ownership and pride in your performance and its impact on the organization's success.
* Effective interpersonal and communication skills - we work cohesively and want to bring in folks who seek to drive unity.
* Ability to independently lead the team and execute projects promptly.
* Proficiency with Atlassian Jira and Confluence for project management and documentation is a plus.
* Curious, hungry, proactive, results-oriented, and data-driven; thrives in fast-paced, team-oriented environments.

BELONG TO A BETTER COMPANY:
* Comprehensive health benefits (medical, dental, vision, life, and disability)
* Competitive salary (DOE) + equity
* 401k plan
* 9 days of observed holiday
* Flexible paid time off
* Subsidized ClassPass membership with access to fitness classes and wellness and beauty experiences
* Ability to work in our beautiful co-working space at WeWork in Playa Vista and other locations
* Free Thrive Market membership with exclusive employee discount
* Coverage for life coaching and therapy sessions on our holistic mental health and well-being platform

We're a community of more than 1 million members who are united by a singular belief: It should be easy to find better products, support better brands, make better choices, and build a better world in the process.

At Thrive Market, we believe in building a diverse, inclusive, and authentic culture. If you are excited about this role along with our mission and values, we encourage you to apply.

Thrive Market is an EEO/Veterans/Disabled/LGBTQ employer.

At Thrive Market, our goal is to be a diverse and inclusive workplace that is representative, at all job levels, of the members we serve and the communities we operate in. We're proud to be an inclusive company and an Equal Opportunity Employer, and we prohibit discrimination and harassment of any kind. We believe that diversity and inclusion among our teammates are critical to our success as a company, and we seek to recruit, develop, and retain the most talented people from a diverse candidate pool. If you're thinking about joining our team, we expect that you would agree!

If you need assistance or accommodation due to a disability, please email us at [email protected] and we'll be happy to assist you.

Ensure your Thrive Market job offer is legitimate and don't fall victim to fraud. Thrive Market never seeks payment from job applicants. Thrive Market recruiters will only reach out to applicants from an @thrivemarket.com email address. For added security, where possible, apply through our company website at www.thrivemarket.com.

© Thrive Market 2024. All rights reserved.

JOB INFORMATION:
* Compensation Description - The base salary range for this position is $175,000 - $225,000 per year.
* Compensation may vary outside of this range depending on several factors, including a candidate's qualifications, skills, competencies, experience, and geographic location.
* Total Compensation includes Base Salary, Stock Options, Health & Wellness Benefits, Flexible PTO, and more!
#LI-DR1

#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Docker, DevOps, and Cloud jobs:
$60,000 — $100,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Los Angeles or Remote
# About you
We're hiring a Senior Backend Engineer to join our founding team. You will work closely with our CTO, Charles, to shape both the team and the platform while we get ready for scale.

You like to get things right: a **pragmatic perfectionist** who will continuously shape our application architecture and make it ready to scale. You understand the right balance between code readability, simplicity, development speed, performance, and maintainability.

You're well-acquainted with typed NodeJS codebases and preferably these technologies: GraphQL, Apollo, Postgres, Redis, ElasticSearch, Docker, AWS, websockets, microservices, event-driven architecture.

# About TheyDo
TheyDo is the first B2B SaaS platform that allows organizations to redefine cross-team collaboration around the customer journey. It is journey management, the product management way. We help teams make sense of a complex data graph and connect it with various data sources. Our users are design-savvy, and we strive to make a highly polished and performant experience for them.

We're a passionate team from across Europe. Founded in 2019, TheyDo has raised $2M+ from top investors to start a movement. We are about to double our team and get our product ready for scale while we onboard customers across all continents.

We're on a mission to help organisations scale Journey Management. Today, everyone is in the Experience business; here, we help our customers make better and faster customer-centric decisions across the entire customer experience. Thanks to TheyDo, everyone agrees, including the customer.

Read more on our [website](https://www.theydo.io).

# Your assignment
Your top priority is shaping the architecture of our product and getting it ready for scale. You'll work on the technically ambitious projects we have planned. Some examples currently on our roadmap:

* Realizing integrations with a wide ecosystem: Miro, Jira, Google Analytics, etc.
* Implementing microservices and extending our event-driven architecture.
* Enabling version control on all user data.
* Improving real-time collaborative functionality, using fractional indexing, last-writer-wins, and other techniques to provide a superior user experience (see the sketch after this section).

As a founding team member, you will get a chance to set the foundations of our engineering culture. You will help articulate our engineering principles and help set the long-term roadmap.
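Fractional indexing, mentioned in the roadmap above, is worth a quick illustration: to reorder items collaboratively, you generate a new sort key strictly between two neighbours instead of renumbering the whole list. A minimal sketch, assuming lowercase-letter keys (a toy, not TheyDo's actual implementation):

```python
# Hedged sketch of fractional indexing: generate a sort key strictly between
# two existing keys so a collaborative reorder rewrites only one record.
# Lowercase letters serve as digits; this is a toy, not TheyDo's code.
import string
from typing import Optional

DIGITS = string.ascii_lowercase  # 'a' (lowest) .. 'z' (highest)
BASE = len(DIGITS)


def key_between(a: str, b: Optional[str]) -> str:
    """Return a key k with a < k (and k < b when b is given), lexicographically."""
    key = []
    i = 0
    while True:
        da = DIGITS.index(a[i]) if i < len(a) else 0
        db = DIGITS.index(b[i]) if b is not None and i < len(b) else BASE
        if db - da > 1:
            key.append(DIGITS[(da + db) // 2])  # room for a midpoint digit
            return "".join(key)
        key.append(DIGITS[da])
        if db - da == 1:
            b = None  # prefix is now strictly below b, so the upper bound opens
        i += 1


k1 = key_between("a", "b")   # "an"
k2 = key_between("a", k1)    # "ag", between "a" and "an"
assert "a" < k2 < k1 < "b"   # repeated inserts never renumber neighbours
```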
# We're looking for
* An ambitious engineer with several years of experience working on back-end architecture and design. Previous experience at a scaled product is a big plus.
* An engineer who wants to be at the foundation of a fast-growing team.
* A product-minded engineer who wants to understand how people use our product and why.
* An asynchronous worker who organises and documents their work.
* A clean coder who writes well-structured and maintainable code.

# What we offer
* Remote position, for 4-5 days per week, across flexible working hours.
* Collaboration with zealous colleagues who have 20+ years of experience working in the field.
* A unique opportunity to shape a product and our growing team.
* Regular off-sites/company outings with the TheyDo team.
* Competitive compensation and equity package.
* As many vacation days as you need; we expect you to take at least 25.
* Professional development reimbursement.
* Mental health and wellness reimbursement.
* Paid parental leave.
* Home office & technology reimbursement.

To summarise, we value work-life harmony backed by personal freedom under responsibility. Sounds like fun? We're looking forward to having you join our team.

# Our engineering team
The engineering team consists of a CTO, three full-stack engineers, one back-end engineer, and one QA tester. We aim for a relaxed environment within the ambitious goals we have for our product.

Our server is fully typed and built using NodeJS, Apollo, Redis, Postgres, ElasticSearch, and other modern technologies. Our web application is also typed and uses VueJS, Apollo, WebSockets, and more. Other tooling currently includes AWS, Storybook, Cypress, Jest, Stripe, and WorkOS.

A typical day at the office for an engineer includes: flexibility to organise your own time, no set hours, ample time for deep work, as few mandatory meetings as possible, plenty of pair programming with team members to get your code just right, reviewing pull requests, and running around in our virtual office.

View our team members [here](https://www.theydo.io/about-us).

# Our culture
TheyDo's culture is 'Do' rather than 'Talk'. We'd rather ask for forgiveness than permission; no one will be blamed for trying. We try to keep things simple because complexity slows us down.

It's not about the time spent but the outcome achieved. It's up to everyone to map, plan, and interact in the best way to get the most out of their day, week, and sprint. Always with an open mindset, because we never know when and where the next great idea will surface.

Being remote, we nourish and cherish connectivity, so no one feels alone or left out. We don't have long lines of communication or decision-making, because hierarchies and silos are part of the past and we love to shape the future. In our virtual office, you can just walk up to your team to have a quick chat, get work done, or simply say hello. We encourage everyone to find their own work/life balance. Whether you choose to work asynchronously or synchronously is up to you, as long as it fits you and your team.

TheyDo is an equal employer, treating everyone as equals. We value diversity and individuality. We think long term and strive to hire the best match for each role, no matter your background.

Please mention the words **SHIELD TOPIC SLIDE** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Salary and compensation
$80,000 — $130,000/year
#Benefits
* Async
#Location
Europe
**Company Description**

Shopify is the leading omni-channel commerce platform. Merchants use Shopify to design, set up, and manage their stores across multiple sales channels, including mobile, web, social media, marketplaces, brick-and-mortar locations, and pop-up shops. The platform also provides merchants with a powerful back office and a single view of their business, from payments to shipping. The Shopify platform was engineered for reliability and scale, making enterprise-level technology available to businesses of all sizes. Headquartered in Ottawa, Canada, Shopify currently powers over 1,000,000 businesses in approximately 175 countries and is trusted by brands such as Allbirds, Gymshark, PepsiCo, Staples, and many more.

Are you looking for an opportunity to work on planet-scale infrastructure? Do you want your work to impact thousands of developers and millions of customers? Do you enjoy tackling complex problems and learning through experimentation? Shopify has all this and more.

The infrastructure teams build and maintain Shopify's critical infrastructure through software and systems engineering. We make sure Shopify, the world's fastest-growing commerce platform, stays reliable, performant, and scalable for our 2000+ member development team to build on, and our 1.7 million merchants to depend on.

**Job Description**

Our team covers the disciplines of site reliability engineering and infrastructure engineering, all to ensure Shopify's infrastructure is able to scale massively while staying resilient.

On our team, you'll get to work autonomously on engaging projects in an area you're passionate about. Not sure what interests you most? Here are some of the things you could work on:

* Build on top of one of the largest Kubernetes deployments in Google Cloud (we operate a fleet of 50+ clusters)
* Collaborate with other Shopify developers to understand their needs and ensure our team works on the right things
* Maintain Shopify's Heroku-style self-service PaaS for our developers, which consolidates over 400 production services
* Help build our own Database as a Service layers, which include features such as transparent load-balancing proxies and automatic failovers, using the current best-of-breed technologies in the area
* Help develop our caching infrastructure and advise Shopify developers on effective use of the caching layers
* Build tooling that delights Shopify developers and allows them to make an impact quickly
* Work as part of the engineering team to build and scale distributed, multi-region systems
* Investigate and resolve production issues
* Build languages, frameworks, and libraries to support our systems
* Build Shopify's predictable, scalable, and high-performing full-text search infrastructure
* Build and support infrastructure and tooling to protect our platform from bots and DDoS attacks
* Autoscale compute up and down based on the demands of the platform, and further protect the platform by shedding lower-priority requests as the load gets high (see the sketch after this list)
* And plenty more!
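The autoscaling bullet above mentions shedding lower-priority requests under load, a pattern worth a quick sketch. The thresholds and the 0-3 priority scale below are invented for illustration; this is not Shopify's implementation:

```python
# Toy sketch of priority-based load shedding: as utilization climbs past a
# soft limit, reject lower-priority requests first. Thresholds and the
# priority scale are illustrative assumptions, not Shopify code.

def allow_request(priority: int, utilization: float,
                  soft_limit: float = 0.70, hard_limit: float = 0.95) -> bool:
    """priority: 0 = most critical (e.g. checkout) .. 3 = best-effort."""
    if utilization < soft_limit:
        return True               # healthy: admit everything
    if utilization >= hard_limit:
        return priority == 0      # overloaded: only the most critical traffic
    # In the soft zone, the deeper we are, the higher the priority required.
    zone = (utilization - soft_limit) / (hard_limit - soft_limit)  # 0.0 .. 1.0
    threshold = 3 - int(zone * 3)  # admits priorities <= 3, then <= 2, then <= 1
    return priority <= threshold


assert allow_request(priority=3, utilization=0.50)      # light load: admitted
assert not allow_request(priority=3, utilization=0.90)  # heavy load: shed
assert allow_request(priority=0, utilization=0.99)      # critical always passes
```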
**We also understand the importance of sharing our work back to the developer community:**

* Ghostferry: an open-source, cross-cloud, multipurpose database migration tool and library
* Services DB: a platform to manage services across various runtime environments
* Shipit: our open-source deployment tool
* Capturing Every Change From Shopify's Sharded Monolith
* Read consistency with database replicas

**Qualifications**

Some of the technology that the team uses: Ruby, Rails, Go, Kubernetes, MySQL, Redis, Memcached, Docker, CI pipelines, Kafka, ElasticSearch, Google Cloud.

Is some of this tech new to you? That's OK! We know not everyone will come in fully familiar with this stack, and we provide support to learn on the job.

**Additional information**

Our teams are distributed remotely across North American and European timezones.

We know that applying to a new role takes a lot of work and we truly value your time. We're looking forward to reading your application.

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous peoples, racialized people, people with disabilities, people from gender and sexually diverse communities, and/or people with intersectional identities.

Please mention the words **SALAD PALM DOLL** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Location
United States, Canada
# We're building the Data Platform of the Future
Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful, and productive engineers in the world.

# Open Positions
**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)

# What is Splitgraph?
## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning, and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration, and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it.
We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://www.twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore), where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy), and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/), based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases.
We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network" (DDN), is a single SQL endpoint where users can query any data on Splitgraph (see the sketch after this list). Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot-reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks.
Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway, or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies**, including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.
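Since the DDN described above presents itself to clients as ordinary Postgres, here is a hedged sketch of what querying it from Python might look like. The endpoint, database name, credentials, and example dataset are assumptions inferred from the description, not verified connection details:

```python
# Hedged sketch: querying the Splitgraph DDN as an ordinary Postgres server.
# Host, dbname, credentials, and the example dataset are illustrative
# assumptions based on the description above, not verified details.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.com",   # assumed DDN endpoint
    port=5432,
    dbname="ddn",                 # assumed database name
    user="<api-key>",             # placeholder credentials
    password="<api-secret>",
)
with conn, conn.cursor() as cur:
    # Datasets are assumed to be addressed as a "namespace/repository" schema.
    cur.execute('SELECT * FROM "some-namespace/some-repo"."some_table" LIMIT 5')
    for row in cur.fetchall():
        print(row)
```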
# Life at Splitgraph
**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth, and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits
- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?
[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Location
Worldwide
We are looking for a Lead DevOps Engineer to join our team at Prominent Edge. We are a small, stable, growing company that believes in doing things right. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want engineers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Many of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com/ for more information and apply through https://prominentedge.com/careers.

Required skills:
* Experience as a lead engineer.
* Minimum of 8 years of total experience, including at least 1 year of web or software development experience.
* Experience automating the provisioning of environments by designing, implementing, and managing configuration and deployment infrastructure-as-code solutions.
* Experience delivering scalable solutions utilizing Amazon Web Services: EC2, S3, RDS, Lambda, API Gateway, message queues, and CloudFormation templates.
* Experience deploying and administering Kubernetes on AWS, GCP, or Azure (see the sketch at the end of this post).
* Capable of designing secure and scalable solutions.
* Strong *nix administration skills.
* Development in a Linux environment using Bash, PowerShell, Python, JS, Go, or Groovy.
* Experience automating and streamlining build, test, and deployment phases for continuous integration.
* Experience with automated deployment technologies such as Ansible, Puppet, or Chef.
* Experience administering automated build environments such as Jenkins and Hudson.
* Experience configuring and deploying logging and monitoring services - fluentd, logstash, GeoHashes, etc.
* Experience with Git/GitHub/GitLab.
* Experience with DockerHub or a container registry.
* Experience with building and deploying containers to a production environment.
* Strong knowledge of security and recovery from a DevOps perspective.

Bonus skills:
* Experience with RabbitMQ administration.
* Experience with kops.
* Experience with HashiCorp Vault administration, and Goldfish (a frontend Vault UI).
* Experience with Helm for deployment to Kubernetes.
* Experience with CloudWatch.
* Experience with Ansible and/or a configuration management language.
* Experience with Ansible Tower (not necessary).
* Experience with VPNs, OpenVPN preferred.
* Experience with network administration and an understanding of network topology and architecture.
* Experience with AWS spot instances or Google preemptible VMs.
* Experience with Grafana administration, SSO (Okta or JumpCloud preferred), LDAP/Active Directory administration, and CloudHealth or cloud cost optimization.
* Experience with Kubernetes-based software, e.g. heptio/ark, ingress-nginx, anchore engine.
* Familiarity with the ELK stack.
* Familiarity with basic administrative tasks and building artifacts on Windows.
* Familiarity with other cloud infrastructures such as Cloud Foundry.
* Strong web or software engineering experience.
* Familiarity with security clearances in case you contribute to our non-commercial projects.

W2 Benefits:
Not only do you get to join our team of awesome, playful ninjas, you also get great benefits:
* Six weeks paid time off per year (PTO + holidays).
* Six percent 401k matching, vested immediately.
* Free PPO/POS healthcare for the entire family.
* We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
* Want to take time off without using vacation time? Shuffle your hours around in any pay period.
* Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we'll buy you the new version whenever you want.
* Want some training or to travel to a conference that is relevant to your job? We offer that too!
* This organization participates in E-Verify.

About You:
* You believe in and practice Agile/DevOps.
* You are organized and eager to accept responsibility.
* You want a seat at the table at the inception of new efforts; you do not want things "thrown over the wall" to you.
* You are an active listener, empathetic, and willing to understand and internalize the unique needs and concerns of each individual client.
* You adjust your speaking style for your audience and can interact successfully with both technical and non-technical clients.
* You are detail-oriented but never lose sight of the Big Picture.
* You can work equally well individually or as part of a team.
* U.S. citizenship is required.

Please mention the words **RIPPLE DESK VERSION** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Location
United States
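The Kubernetes requirement above is concrete enough for a quick illustration: a minimal, hedged sketch using the official Python client to flag deployments that are not fully rolled out. The namespace is a placeholder assumption.

```python
# Hedged sketch: list deployments in a namespace and flag any that are not
# fully rolled out. Uses the official kubernetes Python client; the
# namespace is a placeholder assumption.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() in-cluster
apps = client.AppsV1Api()

for dep in apps.list_namespaced_deployment(namespace="default").items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    status = "OK" if ready == desired else "DEGRADED"
    print(f"{dep.metadata.name}: {ready}/{desired} ready [{status}]")
```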