At pganalyze, we redefine the user experience for optimizing the performance of Postgres databases. Our product helps customers such as Atlassian, Robinhood, and DoorDash understand complex Postgres problems and performance issues.

Application developers use pganalyze to get deep insights into complex database behaviors. Our product is heavy on automated analysis and custom visualizations, and makes automatic recommendations, such as suggesting the best index to create for a slow query.

You will enjoy working at pganalyze if you are a software craftsperson at heart who cares about writing tools for developers. You will take new features from idea to production deployment end-to-end within days. Your work will regularly involve writing or contributing to open-source components as well as the Postgres project.

We are a fully remote company, with the core team based in the San Francisco Bay Area. Our company is bootstrapped and profitable. We emphasize autonomy and focus time by having few meetings per week.

### About the role

Your core responsibility: to develop and optimize our Postgres statistics and analysis pipeline end-to-end, and to work on the processes that generate automated insights from this complex data set. This work involves a detailed understanding of the core data points collected from the source Postgres database as a time series, and optimizing how they are retrieved, transported to the pganalyze servers, and then processed and analyzed.

Today, this data pipeline is a combination of open-source Go code (in the [pganalyze collector](https://github.com/pganalyze/collector)) and statistics processing written in Ruby. You will be responsible for improving this pipeline and introducing new technologies, including a potential rewrite of the statistics processing in Rust.

Some of the work will lead into the depths of Postgres code, and you might need to compile some C code, or understand in detail how the pganalyze parser library, [pg_query](https://pganalyze.com/blog/pg-query-2-0-postgres-query-parser), works.

Your work is the foundation of the next generation of pganalyze, with a focus on the automatic insights we can derive from the workload of the monitored Postgres databases, and on giving fine-tuned recommendations such as which indexes to create or which config settings to tune.

#### At pganalyze, you will:

* Collaborate with other engineers on shipping new functionality end-to-end, and ensure features are performant and well implemented
* Be the core engineer for the foundational components of pganalyze, such as the statistics pipeline that processes all data coming into the product
* Develop new functionality that monitors additional Postgres statistics, or derives new insights from the existing time series information
* Write Ruby, Go, or Rust code on the pganalyze backend and the pganalyze collector
* Evaluate and introduce new technologies, such as whether we should use Rust in more places in the product
* Optimize the performance of pganalyze components, using language-specific profilers or Linux tools like `perf`
* Scale out our backend, which relies heavily on Postgres itself for statistics storage
* Contribute to our existing open-source projects, such as pg_query, or create new open-source projects in the Postgres space
* Work with upstream communities, such as the Postgres project, and contribute code back

#### Previously, you have:

* Worked professionally for at least 5 years as a software engineer
* Written complex, data-heavy backend code in Rust, Go, Ruby, or Python
* Used Postgres for multiple projects, are comfortable writing SQL,
and are familiar with `EXPLAIN`
* Created indexes on a Postgres database based on a query being slow
* Looked at the source of a complex open-source project to chase a hard-to-understand bug
* Written code that fetches data from and/or interacts with cloud provider APIs
* Structured your work and set your schedule to optimize for your own productivity

#### Optionally, you may also have:

* Written low-level C code, for fun
* Used Protocol Buffers, FlatBuffers, msgpack, or Cap'n Proto to build your own APIs
* Analyzed patterns in time series data and run statistical analyses on the data
* Experimented with ML frameworks to analyze complex data sets
* Optimized a data-heavy application built on Postgres
* Written your own Postgres extensions
* Used APM and tracing tools to understand slow requests end-to-end

#### You could also be familiar with:

* Building your own Linux system from scratch
* The many [regards](https://twitter.com/regardstomlane) of Tom Lane on the Postgres mailing list
* Reproducible builds, and why it would be really nice to have them, like yesterday

# Salary and compensation
$140,000 — $180,000/year

# Location

United States / Canada
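To make the statistics pipeline described above concrete: many Postgres statistics views (such as `pg_stat_statements`) expose cumulative counters, so a collector has to diff successive snapshots to turn them into per-interval values, while also handling counter resets. The sketch below is purely illustrative; the field names and structure are assumptions for this example, not pganalyze's actual schema or code.

```python
# Minimal sketch of turning cumulative Postgres counters into per-interval
# deltas, as a statistics pipeline of this kind must do. Field names are
# illustrative, not pganalyze's actual schema.

def snapshot_delta(prev, curr):
    """Compute per-query deltas between two cumulative snapshots.

    Each snapshot maps a query ID to its cumulative counters (for example,
    pg_stat_statements.calls / total_exec_time). A counter lower than its
    previous value means the statistics were reset between snapshots, in
    which case the current value is taken as the delta.
    """
    deltas = {}
    for queryid, counters in curr.items():
        prev_counters = prev.get(queryid, {})
        deltas[queryid] = {
            name: value - prev_counters.get(name, 0)
            if value >= prev_counters.get(name, 0)
            else value  # counter reset between snapshots
            for name, value in counters.items()
        }
    return deltas


prev = {42: {"calls": 100, "total_exec_time": 1500.0}}
curr = {42: {"calls": 140, "total_exec_time": 2100.0},
        7: {"calls": 5, "total_exec_time": 12.5}}

print(snapshot_delta(prev, curr))
# {42: {'calls': 40, 'total_exec_time': 600.0}, 7: {'calls': 5, 'total_exec_time': 12.5}}
```

Queries that appear for the first time (like query ID 7 above) are treated as having started from zero, which is one of the judgment calls a real pipeline has to make explicitly.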
# Runtime Sr., Lead, or Principal Level Platform Engineer, Heroku

Heroku operates the world's largest PaaS cloud, continuously delivering millions of apps with 6+ million container deployments, 16+ billion routing requests, and 10+ terabytes of application logs per day. Our vision is for developers to focus on their applications and leave operations to us.

We work in small groups who care deeply about our users' problems. We plan weekly, chat daily, and work closely together. Our team is a remote community with members excited to work together on challenging distributed systems problems. Equality is a core value for Salesforce; it's at the heart of everything we do and strive to be. That means Equal Opportunity, Equal Advancement, and Equal Pay for all. We do not discriminate on the basis of race, religion, color, national origin, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.

We hope you are passionate about joining our community of engineers who love to learn, work, and operate a gigantic distributed system, build and sustain a remote culture, and help grow and mentor other engineers.

### Examples of recent work Runtime engineers have done at Heroku

* Core infrastructure scaling and growth: broke up a critical, monolithic Ruby application that performs many container scheduling tasks and refactored it as a set of well-scoped gRPC Go services.
* Supporting critical customer applications: diagnosed and fixed a very elusive bug in how signals are forwarded between our platform logging process and customer containers, which was causing customer apps to crash unexpectedly.
* Delivering features to customers: built an automated cert management capability using the Let's Encrypt API to provision free customer SSL certs for domains added to apps and automatically renew expiring certs.
* Researching and learning: prototyped a Kubernetes orchestration backend for our internal Runtime API as part of a larger effort to learn about and adopt new technologies in our runtime.
* Infrastructure improvements: transitioned our use of Kubernetes from self-managed to managed by integrating EKS, and implemented a new authentication scheme to integrate our container registry with EKS.
* Incident response: conducted emergency response when a remote operation timed out during routine API maintenance in our EU runtime, corrupting routing state data for a single node. Incident responders followed documented procedure in our ops playbooks to identify the corrupted node and flush its cache. Remediation work included refining our metrics to reduce our time to diagnose, and improving the automated tooling used for system maintenance.

### Profiles relevant specifically to the Senior Engineer role would feature:

* 3+ years in a full-time, professional software engineering role
* Experience developing production software in Go or Ruby
* Experience developing on IaaS (AWS, GCP, Azure, OpenStack, etc.)
* Demonstrated strong software development best practices, such as documentation-driven design, code review, test coverage, continuous integration, continuous delivery, and phased rollouts
* Enthusiasm for learning new languages, frameworks, and skills
* Excellent written and verbal communication skills, including the ability to work effectively with geographically distributed teams and people of various backgrounds
* Experience participating in an on-call rotation

### Profiles relevant specifically to Lead and Principal roles would additionally feature:

* 5+ years in a full-time, professional software engineering role
* Experience in a technical leadership role in a collaborative team environment
* Experience deploying, operating, and supporting critical production systems
* Experience deploying services on Kubernetes
* Experience participating in an on-call rotation

Salesforce, the Customer Success Platform and world's #1 CRM, empowers companies to connect with their customers in a whole new way. The company was founded on three disruptive ideas: a new technology model in cloud computing, a pay-as-you-go business model, and a new integrated corporate philanthropy model. These founding principles have taken our company to great heights, including being named one of Forbes's "World's Most Innovative Companies" six years in a row and one of Fortune's "100 Best Companies to Work For" nine years in a row. We are the fastest growing of the top 10 enterprise software companies, and this level of growth equals incredible opportunities to grow a career at Salesforce. Together, with our whole Ohana (Hawaiian for "family") made up of our employees, customers, partners, and communities, we are working to improve the state of the world.

# Salary and compensation
No salary data was published by the company, so the range below is an estimate based on similar roles.
$70,000 — $120,000/year

# Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
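The automated cert management work described in the Heroku examples above hinges on one recurring decision: which certificates are close enough to expiry that they should be renewed now. The sketch below illustrates that selection step only; the function name, data shape, and 30-day renewal window are assumptions for this example, not Heroku's actual implementation.

```python
# Illustrative sketch of selecting certificates due for renewal, in the
# spirit of the Let's Encrypt automation described above. The names and
# the 30-day window are assumptions, not Heroku's actual code.
from datetime import datetime, timedelta

RENEWAL_WINDOW = timedelta(days=30)

def certs_due_for_renewal(certs, now):
    """Return the domains whose certificates expire within the renewal window.

    `certs` maps a domain name to its certificate expiry datetime. Already
    expired certificates are included, since they need renewal most urgently.
    """
    return sorted(domain for domain, expires_at in certs.items()
                  if expires_at - now <= RENEWAL_WINDOW)


now = datetime(2024, 1, 1)
certs = {
    "app1.example.com": datetime(2024, 1, 20),   # expires in 19 days -> renew
    "app2.example.com": datetime(2024, 6, 1),    # months away -> leave alone
    "app3.example.com": datetime(2023, 12, 30),  # already expired -> renew
}
print(certs_due_for_renewal(certs, now))
# ['app1.example.com', 'app3.example.com']
```

Renewing well before expiry (rather than at it) is the standard practice with short-lived certificates such as Let's Encrypt's 90-day certs, since it leaves a buffer for retries if an issuance attempt fails.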