This job post is closed and the position is probably filled. Please do not apply.
Let's start with why we exist.

Fleet builds open source software to manage and secure computing infrastructure: employee laptops, cloud servers, and more. Our technology helps IT and security teams build trust within their organization, while getting their jobs done more effectively.

Fleet is an all-remote company with experienced founders, including two creators of popular open source projects and a compelling lead investor. Our business model is inspired by the success of GitLab and Elastic, and we have incredible early customers ranging from startups to Fortune 500 companies with hundreds of thousands of endpoints.

What happens when you join us?

* As the first senior engineering hire, this position offers huge potential for growth.
* You will write significant open source code, merging commits in your first days at the company.
* You will work closely with the CTO and CEO to define technical and product vision.
* Over time, you will establish yourself as a leader in Fleet's growing team and user community, whether through management or expert-level individual contributions.

Why should you join us?

* Work from anywhere with good internet. (We're 100% remote. No office. No commute.)
* Help make endpoint monitoring less intrusive and more transparent.
* Safeguard the production servers and employee laptops of Earth's largest companies.
* Build greenfield features and make key technical decisions that go live in days.
* Most (if not all) of the code you write is public and highly visible at github.com/fleetdm/fleet.

Are you our new teammate?

* You are competent with source control in Git. You have great written communication skills.
* You can mentor other developers and do code reviews. Maybe you managed open source projects before; maybe you collaborated closely with more junior engineers at work.
* You look forward to working with designers to improve the user experience of the things you work on.
* You bring senior talent to our team and open source community, with 4+ years of equivalent experience in one or more of the Engineering Foundations below (and interest in digging into the others).
* Nice to have: Experience working on an all-remote, distributed team.
* Nice to have: Experience working in IT operations and/or cybersecurity.
* Nice to have: Experience working with Mobile Device Management (MDM) APIs.
* Nice to have: Experience deploying/monitoring/managing containers with Docker/K8s.

Engineering Foundations

An ideal senior candidate has 4+ years of equivalent experience in one or more of the following three engineering foundations (and interest in digging into the others):

I. Frontend

Fleet's frontend is a single-page application (SPA) written in JavaScript with React and Redux. We strive for "convention over configuration", offering a user experience that helps security and IT staff enjoy their jobs.
There are many interesting challenges in helping our users understand the data collected from their laptops and servers.

* Experience building and architecting SPAs in JavaScript/TypeScript (2+ years of equivalent experience with React specifically).
* Expert CSS skills (we use Sass).
* Ability to recommend and implement frontend testing patterns (E2E tests, etc.).
* Nice to have: Experience developing responsive applications.
* Nice to have: Familiarity with frontend performance profiling and optimization.
* Nice to have: Experience building data visualizations (graphs, charts, etc.).
* Nice to have: Experience with React Native, Electron.js, or similar.

II. Backend

Fleet's server is written in Go with go-kit. Deployments range from single servers to over 100,000 clients connected to horizontally scaled Fleet servers, handling tens of thousands of requests per minute. We aim to keep Fleet's deployment as simple as possible to ease self-hosted deployment. MySQL and Redis are used for persistence and caching.

* Experience building scalable, production-quality servers.
* Ability to recommend and implement backend testing patterns (E2E tests, etc.).
* Familiarity with server and SQL performance profiling and optimization.
* Familiarity with database migration strategies.
* Nice to have: Experience programming with Go and go-kit.
* Nice to have: Experience with Redis and/or MySQL.
* Nice to have: Experience deploying and operating hosted SaaS services.
* Nice to have: Experience working with Mobile Device Management (MDM) APIs.
* Nice to have: Experience deploying/monitoring/managing containers with Docker/K8s.

III. Endpoints

Fleet builds on top of the osquery agent (osquery.io), a Linux Foundation OSS project. Our CTO co-created osquery and serves on the Technical Steering Committee. On the endpoint we are building Orbit, a wrapper for osquery that will also become our platform for deploying additional open-source software such as Fleet Desktop (an interface for device users to interact with Fleet). (See the example query below for a taste of the data osquery exposes.)

* Experience developing applications on macOS, Linux, and/or Windows.
* Familiarity with packaging tools: macOS .pkg, Linux .deb, Linux .rpm, Windows .msi, etc.
* Familiarity with service persistence: macOS launchd, Linux systemd, Windows Services, etc.
* Experience with building cross-platform user interfaces.
* Experience managing and debugging the performance of software installed on endpoints.
* Readiness to write code that will directly impact performance for hundreds of thousands of people.
* Nice to have: Go (for Orbit) and C++ (for osquery) programming experience.
* Nice to have: Experience building and securing update systems for endpoint software.
* Nice to have: Experience with low-level system APIs in macOS, Linux, and/or Windows.

Please include a few sentences about your experience with the Engineering Foundations in the "Experience" box of the application.
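To give a concrete flavor of the endpoint data involved, here is a minimal sketch (not part of the original posting) of running an osquery query locally and reading the results from Python. It assumes the `osqueryi` binary is installed and on the PATH; `os_version` is one of osquery's standard tables.

```python
# Minimal sketch: run an osquery query in one-shot mode and parse the JSON output.
# Assumes osqueryi is installed and on PATH; "os_version" is a standard osquery table.
import json
import subprocess

def run_osquery(sql: str) -> list[dict]:
    """Invoke osqueryi with --json output and return the rows as dicts."""
    result = subprocess.run(
        ["osqueryi", "--json", sql],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for row in run_osquery("SELECT name, version, platform FROM os_version;"):
        print(row)
```

In a real Fleet deployment these queries are scheduled and distributed by the Fleet server rather than run by hand; the sketch only illustrates the kind of tabular data the agent exposes.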
# Salary and compensation

No salary data was published by the company, so we estimated the range based on similar jobs related to Engineer, Developer, Digital Nomad, React, JavaScript, Cloud, CSS, Mobile, Senior, Junior, SaaS, Linux and Backend roles:

$70,000 — $120,000/year
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)

# What is Splitgraph?

## Open Source Toolkit

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.
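As a rough illustration of that Docker/Git-style workflow, here is a hedged sketch (not part of the original posting) that drives the `sgr` CLI from Python. The command names (`sgr init`, `sgr sql`, `sgr commit`) follow the public sgr quickstart as best recalled and should be checked against the docs; the repository name `demo` and its table are hypothetical.

```python
# Illustrative only: create a repository, add some data, and snapshot it as an image.
# Command names are taken from the sgr quickstart as recalled; "demo" is a made-up repo.
import subprocess

def sgr(*args: str) -> None:
    """Run an sgr CLI command, raising if it exits non-zero."""
    subprocess.run(["sgr", *args], check=True)

sgr("init", "demo")                                               # create an empty repository
sgr("sql", "CREATE TABLE demo.readings (ts TEXT, temp_c REAL)")   # repositories map to Postgres schemas
sgr("sql", "INSERT INTO demo.readings VALUES ('2021-01-01', 3.5)")
sgr("commit", "demo")                                             # snapshot the current data as an image
```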
## Splitgraph Cloud

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://www.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What does our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.
- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([LuaJIT](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph (see the connection sketch after this list). Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.
- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS services for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.
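Because the data delivery network speaks the ordinary Postgres wire protocol, any Postgres client can query it. Below is a minimal sketch (not from the original posting) using `psycopg2`; the hostname, database name, and the `"namespace/repository"."table"` naming are assumptions drawn from Splitgraph's public docs, and the credentials and dataset name are placeholders.

```python
# Illustrative only: query the DDN as if it were a regular Postgres server.
# Host/dbname and the example table are assumptions; credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.com",   # assumed public DDN endpoint
    port=5432,
    dbname="ddn",                 # assumed database name
    user="YOUR_API_KEY",
    password="YOUR_API_SECRET",
)
with conn, conn.cursor() as cur:
    # Hypothetical dataset; repositories are addressed as "namespace/repository"."table".
    cur.execute('SELECT * FROM "splitgraph/socrata"."datasets" LIMIT 5')
    for row in cur.fetchall():
        print(row)
```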
# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

# Location

Worldwide
This job post is closed and the position is probably filled. Please do not apply.
Who We Are:

Cardinal Financial is a nationwide direct mortgage lender whose mission is to prove that homeownership is possible for everyone. By bringing an open-minded approach to an often closed-minded industry, we're able to embrace every unique financial situation differently in order to craft the best possible loans for our borrowers. We pride ourselves on providing excellent service backed by our groundbreaking technology, and these two components of our process come together to complete a simple, personalized mortgage experience. But it all starts with our people.

We believe that no matter where you fit in our organization—Sales, Human Resources, Information Technology, or even re-stocking the break rooms with endless coffee supplies—everyone can influence the experience that we provide to our customers and our partners. We tell our customers and our partners that anything can be reimagined. So why not your career? Looking to join a company that values its people, innovates and expands on its proprietary technology, and is growing at a ridiculous rate?! Apply below!

Who We Need:

We are looking for a DevOps Engineer to join the team managing the infrastructure for a national mortgage lender's technology platform. You will design, build, and support the infrastructure using modern tools like Terraform, Kubernetes (K8s) and GitLab in a multi-cloud environment. You will work with the Software Engineering, Production Systems, and Business Intelligence teams in a highly collaborative organization.

What You Will Do:

* Design, implement and maintain Infrastructure as Code (IaC)
* Improve code deployment and unit testing frameworks
* Improve monitoring of infrastructure and applications
* Maintain and improve our security posture, and ensure best practices are adhered to in new projects
* Develop and maintain extract, transform and load (ETL) mechanisms for data analysis
* Manage your tasks and their priorities with feedback and review from a supportive team
* Investigate new technologies and deploy them in support of the team

What You Need:

* At least 1 year managing cloud provider resources in AWS, Azure or GCP
* Experience writing and maintaining complex Dockerfiles
* Experience writing CI/CD (Continuous Integration / Continuous Deployment) pipelines using tools such as GitLab or Jenkins
* Experience implementing network, server, and application-status monitoring tools
* 3+ years of Linux/Unix experience
* 1+ year of experience with Git
* Experience with basic database administration and SQL
* 1+ years of networking and security experience a plus
* Use of infrastructure-as-code tools such as Terraform or CloudFormation a plus
* Use of server provisioning software such as Ansible, Puppet or Chef a plus
* Experience in container orchestration using tools like Kubernetes or Docker Swarm a plus
* Knowledge of Python and/or Java a plus

What We Offer:

* Strength, stability, and vision
* Highly engineered proprietary technology that is revolutionizing the mortgage industry
* An empowered culture where your ideas are important and your voice matters
* Opportunity for career growth
* Benefits that become effective the first day of the month following your start date, including Medical, Dental, Vision, and much more
* 401(k) with 50% match up to a maximum employee contribution of 5%, effective the 1st of the month following 30 days of employment

Our Technology:
Our SaaS enterprise mortgage lending platform is a challenging and complex system that includes lender and borrower interfaces, workflow, document management, advanced automation, and integrations with external entities and services.

The server architecture is stateless, cleanly managing the business logic and persistence layer, exposed as a RESTful JSON API. The server is written using a combination of Java 8 on Jetty, and Node.js for asynchronous tasks. We persist our data in MySQL using MyBatis and use Redis for caching, metrics, and non-critical message queueing.

The UI uses a custom JavaScript MVC framework with many modern techniques: dynamically loaded code modules, client-side routing and templates, powerful data-binding features, integrated services, and an advanced component architecture.

We develop on Macs and deploy on AWS. Our tools include: GitHub, Jenkins, Gradle, Grunt, JAXB, iText, Aspose, IntelliJ IDEA, and Pivotal Tracker.
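The platform itself is Java and Node.js, but as a language-neutral illustration of the caching role Redis plays in an architecture like the one described above, here is a small Python sketch (not from the original posting) of the cache-aside pattern. The key name, TTL, and the `load_loan_from_mysql()` helper are hypothetical.

```python
# Illustrative only: cache-aside lookup in front of a relational store,
# mirroring the Redis-for-caching role described above. All names are made up.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def load_loan_from_mysql(loan_id: str) -> dict:
    # Placeholder for a MySQL/MyBatis-style lookup in the real system.
    return {"id": loan_id, "status": "underwriting"}

def get_loan(loan_id: str, ttl_seconds: int = 300) -> dict:
    key = f"loan:{loan_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit
    loan = load_loan_from_mysql(loan_id)    # cache miss: fall back to the database
    cache.setex(key, ttl_seconds, json.dumps(loan))
    return loan
```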
# Salary and compensation

No salary data was published by the company, so we estimated the range based on similar jobs related to DevOps, Engineer, JavaScript, Finance, Java, Cloud, Python, SaaS and Linux roles:

$70,000 — $120,000/year