About Baubap

We are a fast-growing Mexican fintech startup with the mission to become the bridge to people's financial freedom through technology. We provide microloans to people in financial need through a fast and efficient process, always treating them with the respect and dignity they deserve. Our long-term vision is to be the most inclusive digital bank in LATAM, with more than 2.5 million clients.

*We require that the candidate is fluent in Spanish and currently resides in the LATAM region, as it's important to be willing to work in the Mexican Central Time Zone.

About you

Are you ready to shape the future of personal microloans? As a Backend Engineer focused on Financial Core Systems, you'll be at the heart of ensuring our backend's performance, stability, and reliability. Your work will directly streamline our operations and accelerate product development.

You'll collaborate with a dynamic and passionate team dedicated to problem-solving and continuous improvement. Your contributions will be instrumental in achieving our product goals and ensuring our customers have the best possible experience with our products.

You'll take charge of important projects focused on improving functionality, helping us achieve our key objectives and results (OKRs). Your work will enhance user experiences and streamline processes through clear communication and quick updates.
Join us and make a real impact in the micro-loan industry!

As a Backend Engineer, these are the challenges you will help us solve:

* Work with the team to design and implement backend system architecture, focusing on scalability, maintainability, and efficiency.
* Improve backend performance through code optimization and architectural enhancements.
* Integrate external financial and compliance systems and services, including third-party products across a variety of API implementations.
* Develop and maintain reliable and efficient APIs for server-to-client, server-to-server, and event-driven communication.
* Implement security best practices to protect against vulnerabilities and cyber threats.
* Diagnose and resolve complex issues related to high-volume transactions in the product's backend.

Day to day

* Collaborate to design and implement scalable, maintainable, and efficient backend system architectures using microservices.
* Develop and maintain robust and efficient APIs for server-to-client, server-to-server, and event-driven communication, ensuring performance under high traffic.
* Optimize backend components to ensure optimal performance, scalability, and reliability. Conduct regular performance tuning and bug fixing.
* Implement security best practices to protect against vulnerabilities and cyber threats.
* Conduct code reviews and provide constructive feedback to maintain high code quality.
* Ensure seamless communication and data exchange between different components.
* Participate in regular team meetings, brainstorming sessions, and collaborative planning.
* Stay updated with industry trends, emerging technologies, and best practices to continuously improve development processes, tools, and methodologies, enhancing productivity and product quality.

Requirements

* 5+ years of experience in backend development on a fast-growing product.
* Hands-on experience designing and building microservices architectures.
* Proven ability to develop and maintain RESTful and/or GraphQL APIs.
* Demonstrated skills in optimizing system performance, scalability, and reliability.
* Excellent teamwork and communication skills, with the ability to work effectively with cross-functional teams.
* Commitment to writing clean, maintainable, and efficient code, following industry standards and best practices.
* Experience with monitoring, maintaining, and supporting backend systems in a production environment.
* Ability to create and maintain detailed technical documentation for system architecture, design, and processes.
* Experience writing and maintaining unit, integration, and end-to-end tests to ensure system reliability.
* Experience designing and maintaining scalable data models in relational databases such as PostgreSQL and MySQL, with the capability to ensure data integrity and precision.

Nice to have

* Experience tech-leading small groups of at most 6 people, giving them technical direction and support.
* Expertise in putting together well-designed solutions to support constantly growing financial platforms by implementing cutting-edge technology and patterns.
* In-depth knowledge and experience working in the fintech industry.
* Familiarity with STP (Sistema de Transferencias y Pagos) and its operations.
* Understanding and experience with SPEI (Sistema de Pagos Electrónicos Interbancarios) and its implementation.
* Expertise in disbursement processes, payment gateways, and financial transaction handling.
* Knowledge of various financial products, particularly personal micro loans and their lifecycle.
* Understanding of regulatory compliance requirements in the Mexican fintech sector.
* Knowledge of risk assessment and management in financial services.
* Experience applying machine learning techniques to financial data for fraud detection, credit scoring, etc.
* Proficiency with Docker and Kubernetes for managing containerized applications.
* Skills in data analysis, using tools like SQL, Python (Pandas), or R to extract insights from financial data.

What is our way of working?

We aim to be as product-centric as possible, which means we always prioritise:

* Listening to our customers (whether internal or external), primarily qualitatively and secondarily quantitatively
* Focusing on real problems our clients face
* A strong focus on customer experience
* Ensuring that every product adds value to both our business and our customers
* Falling in love with the problem instead of the solution
* Quick validation and learning
* Strong collaboration within your team and with other teams
* Small, progressive, incremental delivery; innovation comes from iteration, not from scratch.

What we offer

* Being part of a multinational, highly driven team of professionals
* Flexible and remote working environment
* High level of ownership and independence
* 20 vacation days / year + 75% holiday bonus
* 1 month (proportional) of Christmas bonus
* Grocery vouchers (vales de despensa) of 3,257 MXN / month
* Health & Life insurance
* Home office set-up budget
* Unlimited budget for Kindle books
* 2 psychological sessions/month with Terapify
* Baubap Free Loan

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, GraphQL, Python, Docker, Senior, Engineer, and Backend jobs:

$80,000 — $120,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Mexico City, Mexico City, Mexico
๐ Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
About Us

Wizard is revolutionizing the shopping experience, using the power of generative AI and rich messaging technologies to build a personalized shopping assistant for every consumer. We scour the entire internet of products and ratings across brands and retailers to find the best products for every consumer's personalized needs. Using an effortless text-based interface, Wizard AI is always just a text away. The future of shopping is here. Shop smarter with Wizard.

The Role

We seek a talented and dedicated Python Engineer to join our AI/ML team. In this role, you will be instrumental in developing and maintaining the core functionality of our applications and services, ensuring the highest quality and performance.

Key Responsibilities:

* You will be embedded on the AI/ML team, where you can work on the next-generation AI Conversational Commerce Platform
* Design and implement scalable solutions for the entire machine learning lifecycle, from data preprocessing, data retrieval functions, and platform integrations to model drift monitoring and online learning.
* Write clean, scalable, and maintainable code, adhering to best practices and coding standards
* Perform code reviews, providing constructive feedback to peers to ensure code quality and consistency
* Troubleshoot, debug, and resolve software defects and issues, identifying root causes and implementing effective solutions
* Participate in the full software development life cycle, from ideation to deployment, including requirements analysis, design, coding, testing, and documentation
* Support and maintain existing applications and services, implementing enhancements and optimizations as needed
* Continuously research and stay up to date with the latest industry trends and emerging technologies, sharing knowledge with team members and suggesting ways to improve our products and processes
* Contribute to the creation and maintenance of technical documentation, including API specifications, user guides, and internal documentation

Requirements:

* Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience
* 5+ years of experience in software development, with a demonstrable focus on Python programming in a high-availability environment
* Experience working with researchers or scientists in ML, NLP, and AI
* Experience developing products with AI frameworks and integrations
* Expertise in Django, FastAPI, Flask, or other Python web frameworks at scale
* Strong understanding of Object-Oriented Programming (OOP) principles and design patterns
* Expertise in GraphQL and RESTful API design and implementation
* Familiarity with relational and non-relational databases (e.g., MySQL, PostgreSQL, MongoDB)
* Familiarity with at least one other common programming language, such as TypeScript, JavaScript, Rust, or Go
* Version control systems such as Git are second nature
* Strong problem-solving skills and the ability to think critically and creatively
* Experience using application monitoring tools to measure performance and system health
* Excellent communication and collaboration skills, with the ability to work effectively within a team and across departments
* A proactive, self-motivated, and results-driven approach, with a strong desire to learn and grow professionally
* Excitement about the future opportunities of building AI-enabled products and services

Nice-to-haves:

* Experience with front-end development technologies (e.g., HTML, CSS, JavaScript, React, Angular)
* Familiarity with cloud computing platforms (e.g., AWS, GCP, Azure)
* Experience with containerization technologies, such as Docker and Kubernetes
* Knowledge of Agile methodologies, such as Scrum and Kanban
* Previous experience in a startup environment

The expected salary for this role is $185,000-$235,000, depending on skills and experience.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, Embedded, GraphQL, Python, Docker, Cloud, Git, API, and Engineer jobs:

$65,000 — $125,000/year
#Location
New York City, New York, United States
Do you want to work for a mission-driven non-profit, writing software that will contribute to helping the livelihoods of millions of coffee farmers around the world? Enveritas is a 501(c)3 non-profit and Y Combinator-backed startup looking to hire for our Engineering & Data Group. You can learn more about this job and about our Backend and Data Engineering Team at https://www.enveritas.org/jobs/backend-software-eng/

We are looking for two backend software engineers with a focus on Python and PostgreSQL to join us on a remote/global, full-time basis. Our Backend and Data Engineering Team is a four-person team (soon to be six!) and is part of our Engineering & Data Group: a quirky, talented, and humble group of about twenty with diverse backgrounds ranging from journalism to academia to international industry.

About Our Backend & Data Engineering Team

The Backend & Data Engineering Team builds software to collect, analyze, and report data about coffee farmers' conditions and practices. This large-scale data-collection effort requires many moving parts to work together, and we use technology to support that effort at every step of the process: from identifying coffee farms in satellite imagery, to coordinating survey edits across country teams, to detecting data anomalies in real time that can be investigated while teams are still in the field. A core part of our work is in data aggregation and report generation, with insights ultimately being shared with roasters and other stakeholders on how to assist in improving the social, economic, and environmental conditions of smallholder farmers.

While our tooling varies across internal products, our backend services primarily use a Python/PostgreSQL stack running on Linux to run our GraphQL APIs. We use git and GitHub for maintaining our code, CircleCI for CI/CD, and AWS for hosting our services and static resources, with containerization where appropriate for development and deployment. We've begun working with Terraform.

What You'll Be Doing

You will contribute to major feature planning and development, both independently and in collaboration with your teammates.

* Implement new features on our core platforms, Jebena and Sini. You'll participate in long-term planning and product roadmaps, collaborate with product managers on writing specs for the team to implement, and develop features from specs. You should be comfortable collaborating with non-Engineering teams to understand their feature needs. The lion's share of your time will be spent working with Python and PostgreSQL to add features to our internal platforms.
* Maintain and enhance existing code. You'll work with other engineers to triage and resolve incoming issues (we use Sentry). Our team also reserves Fridays for bug-fixing, resolving technical debt, and discovering and relieving pain points for our users.
* Manage AWS services. In tandem with our Head of IT, part of this role includes helping manage our AWS account, including reviewing our CI/CD setup and proposing ways to further automate and secure it, including expanding our usage of Terraform.

Qualifications

Read this first: research shows that people of different backgrounds read job postings differently. If you don't think you meet all of the qualifications but do think you'd be a great match for us, please consider applying and sharing more in your cover letter. We'd love to talk with you to see what skills you can bring to our team.
That said, we are most likely to be interested in your candidacy if you can demonstrate the majority of the qualifications listed below:

* A degree in computer science, or equivalent training in the principles of software engineering.
* A strong grasp of design patterns for building software that is well-encapsulated, performant, and elegant.
* Multiple years of professional experience as a backend engineer in more than one team environment, including both developing engineering specs and writing code in Python.
* Extensive experience with Python and PostgreSQL, and with creating well-designed data models.
* A background developing applications that provide HTTP-based APIs.
* Familiarity with Docker containers, AWS services (EC2, RDS, CloudFront), and CI/CD setups.
* Excellent communication and analytical skills.

Who You Are

Our team is fully distributed, so you should be comfortable with remote work. This is a full-time individual contributor role. While you can be located anywhere that our EOR (Deel) supports, our core hours are 10am to 2pm Eastern Time, Monday through Friday, with team members choosing either an early start or a later stop as suits them.

You should be inspired by our mission to improve the lives of smallholder coffee farmers, and have an interest in sustainability. You should have deep empathy for users of our tools and understand the importance of supporting the work of other teams. Because operational and business needs can be ambiguous and change on a short time scale, you should love environments with uncertainty, and enjoy not only solving problems but discovering and demystifying them.

We are a small team! You should be comfortable working both independently and as a thoughtful collaborator, sensitive to the legibility and maintainability of your code when it is in the hands of your teammates.

About Working With Us & Compensation

Enveritas has teams around the world: we are about 100 people spread over almost two dozen countries, and of all backgrounds, faiths, and identities. To learn more about working at Enveritas, see https://www.enveritas.org/jobs/

For a US-based hire, the base salary for this position will be between $130,000 and $150,000 annually (paid semi-monthly). This is a full-time exempt position. Full benefits include a 401k with matching contributions, Medical/Dental/Vision, a Flexible Spending Account (FSA), 4 weeks of vacation in addition to 13 standard holidays, and personal/sick time.

For a hire outside the US, our offer will be competitive; the specific benefits and compensation details will vary as required to account for your region's laws and requirements. Salary for this position will be paid in the relevant local currency.

For all staff, we are able to offer:

* An annual education budget for conferences, books, and other professional development opportunities.
* An annual all-company retreat.
* Field visits to our Country Ops teams in coffee-growing countries such as Colombia, Costa Rica, Ethiopia, and Indonesia.

Interview Process

We are committed to fair and equitable hiring. To honor this commitment, we are being transparent about our interview process. We are interested in learning what working with you would be like, and we believe the below is the fairest method for us to see you at your best, and for you to learn about us! If you feel that a different method would be better for us to learn what working together would be like, please tell us in your application.
After your introductory interview, we expect your interview process to take three to four weeks (depending on scheduling) and to consist of four conversations totaling about five hours. You should also plan to spend about four hours in total preparing for interviews. See the hiring page at https://www.enveritas.org/jobs/backend-software-eng/ for details about each of these interviews, including links to our interview prompts as available.

* Introductory Interview (30 minutes; Google Meet; audio-only)
* First Technical Interview (60 minutes; Google Meet)
* Second Technical Interview (60-90 minutes; Google Meet)
* Manager Interview (45-60 minutes; Google Meet)

How to Apply

Please apply using our Greenhouse application form. Feel free to contact us at [email protected] should you have any questions about the position or the interview process. Questions about this opportunity or process will not reflect negatively on your application.

We care deeply about diversity. Our work is complex and nuanced, so the more diversity we have in the voices working on our problems, the larger an impact our work can have for the world. Enveritas is an Equal Opportunity Employer encouraging an inclusive and diverse workforce. We embrace and celebrate the unique experiences, perspectives, and cultural backgrounds that each individual brings to the workplace. We are dedicated to hiring employees who reflect the communities we serve and strongly encourage qualified candidates from all backgrounds to apply.

A few notes about our communications: we are not able to reply to messages sent to staff outside of either our application process or our jobs email address, as this is unfair to other candidates. Also, Enveritas has been made aware of fake job postings by individuals pretending to hire persons seeking employment. These individuals are looking to collect personal information about you for fraudulent purposes. All legitimate Enveritas job openings are posted at https://enveritas.org/jobs/ and all recruiting emails from Enveritas team members will come from @enveritas.org.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, GraphQL, Python, Docker, Education, Git, Engineer, Linux, and Backend jobs:

$70,000 — $110,000/year
This job post is closed and the position is probably filled. Please do not apply. Work for Commit and want to re-open this job? Use the edit link in the email when you posted the job!
*Hiring Senior Full-Stack Developers located in Canada*
No more technical interviews
Work with over 70 pre-vetted impactful startups in North America
Gain access to transparent salary bands AND mentorship to grow your career
Collaborate with like-minded tech junkies
Build new tools with JavaScript, React, Ruby and much more!
Commit is a VC-backed remote-first community and accelerator program for Canadian senior software engineers looking to join some of Silicon Valley's most innovative startups as one of the first engineers.
We work exclusively with startups that are financially stable, offer great salaries, use an exciting tech stack, champion diversity and inclusion, and care about supporting your growth. We provide all the info you need so you can make the right decision and continue to cultivate your craft.
Go through a brief 3-step interview process with Commit.
If you're accepted into the program, you will be paid a full salary while we work together to match you to a startup.
Once matched, you work with a startup for 3 months. If you're happy, you can stay with them; if not, we'll work together to find a better match.
Throughout this process, you gain access to our large network of software developers. Curious about what it's like to join a startup as the first engineer? Looking for the best course on Rust? Running into an issue with Terraform that you can't solve? Someone in the community has been in your shoes and can help. We're run by engineers for engineers.
Full time paid employment as an Engineering Partner
Base salary of $115K to $140K CAD depending on experience
Extended health and dental plan for you and for your family
The right equipment to do your best work
Access to your own career coach and a mentor for your job search
We provide 15 vacation days on top of statutory holidays while you're part of the Engineering Partner program. There is no limit on Sick Days or Personal Days
Invitation-only events with technical leaders. We've been lucky to have guests like Katie Wilde (VP Engineering @ Buffer), Armon Dadgar (CTO @ Hashicorp), Gokul Rajaram (board member at DoorDash, Coinbase, Pinterest and The Trade Desk) and many others join us for private learning sessions.
We are a fully distributed, remote-first community, launched in Vancouver, with posts in Toronto, San Francisco, Mexico City, and more. We raised $6M from Accomplice, Inovia Capital, Kensington Capital Partners and Garage Capital.
About You:
4+ years of experience in software engineering (non-internship)
Located in Canada
Experience working on SaaS, marketplace, consumer, or infrastructure products
Entrepreneurial mindset
Growth-oriented attitude
Ambitions of excellence in your craft. Some of our past developers have grown into CTOs and principal engineers, and/or joined companies as the first engineer
We believe that language is a tool. It's more important that you have experience with one or more modern coding languages than with any particular language itself.
You might also have:
Understanding of basic DevOps: AWS, GCP, Docker, Kubernetes/Terraform, CI/CD
Understanding of RESTful APIs and/or GraphQL
Understanding of cloud-native distributed systems and microservices
Our Commitment to Diversity & Inclusion:
As an early-stage startup, we know it's critical to build inclusive processes as a part of our foundation. We are committed to building and fostering an environment where our employees feel included, valued, and heard. We strongly encourage applications from Indigenous peoples, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities.
Please mention the word TOUGHEST when applying to show you read the job post completely (#RMjE2LjczLjIxNi4xMjU=). This is a feature to avoid fake spam applicants. Companies can search these words to find applicants that read this and instantly see they're human.
Salary and compensation
$110,000 — $140,000/year
Location
Canada
How do you apply?
This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Splitgraph and want to re-open this job? Use the edit link in the email when you posted the job!
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://www.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction.
Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases.
We have `auth-db` for storing sensitive data, `registry-db`, which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db`, where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need to manually maintain an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Under the hood, it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query.
We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACLs, orchestration and firewalling.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot-reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use it to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage.
That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously and try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash.
We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building its initial team.** As an early contributor, you'll have a chance to shape our mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

# Location

Worldwide
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.