About Baubap
We are a fast-growing Mexican fintech startup on a mission to become the bridge to people's financial freedom through technology.
We provide microloans to people in financial need through a fast and efficient process, always treating them with the respect and dignity they deserve.
Our long-term vision is to be the most inclusive digital bank in LATAM, with more than 2.5 million clients.

*We require that candidates be fluent in Spanish, currently reside in the LATAM region, and be willing to work in the Mexican Central Time Zone.

About you

Are you ready to shape the future of personal microloans? As a Backend Engineer focused on Financial Core Systems, you'll be at the heart of ensuring our backend's performance, stability, and reliability. Your work will directly streamline our operations and accelerate product development.

You'll collaborate with a dynamic and passionate team dedicated to problem-solving and continuous improvement. Your contributions will be instrumental in achieving our product goals and ensuring our customers have the best possible experience with our products.

You'll take charge of important projects focused on improving functionality, helping us achieve our key objectives and results (OKRs). Your work will enhance user experiences and streamline processes through clear communication and quick updates. 
Join us and make a real impact in the micro-loan industry!

As a Backend Engineer, these are the challenges you will help us solve:

* Work with the team to design and implement backend system architecture, focusing on scalability, maintainability, and efficiency.

* Improve backend performance through code optimization and architectural enhancements.

* Integrate external financial and compliance systems and services, including third-party products across a variety of API implementations.

* Develop and maintain reliable and efficient APIs for server-to-client, server-to-server, and event-driven communication.

* Implement security best practices to protect against vulnerabilities and cyber threats.

* Diagnose and resolve complex issues related to high-volume transactions in the product's backend.

Day to day

* Collaborate to design and implement scalable, maintainable, and efficient backend system architectures using microservices.

* Develop and maintain robust and efficient APIs for server-to-client, server-to-server, and event-driven communication, ensuring performance under high traffic.

* Optimize backend components to ensure optimal performance, scalability, and reliability. 
Conduct regular performance tuning and bug fixing.

* Implement security best practices to protect against vulnerabilities and cyber threats.

* Conduct code reviews and provide constructive feedback to maintain high code quality.

* Ensure seamless communication and data exchange between different components.

* Participate in regular team meetings, brainstorming sessions, and collaborative planning.

* Stay up to date with industry trends, emerging technologies, and best practices, and continuously improve development processes, tools, and methodologies to enhance productivity and product quality.

Requirements

* 5+ years of experience in backend development on a fast-growing product.

* Hands-on experience designing and building microservices architectures.

* Proven ability to develop and maintain RESTful and/or GraphQL APIs.

* Demonstrated skills in optimizing system performance, scalability, and reliability.

* Excellent teamwork and communication skills, with the ability to work effectively with cross-functional teams.

* Commitment to writing clean, maintainable, and efficient code, following industry standards and best practices.

* Experience monitoring, maintaining, and supporting backend systems in a production environment.

* Ability to create and maintain detailed technical documentation for system architecture, design, and processes.

* Experience writing and maintaining unit, integration, and end-to-end tests to ensure system reliability.

* Experience designing and maintaining scalable data models in relational databases such as PostgreSQL and MySQL, with the ability to ensure data integrity and precision.

Nice to have

* Experience tech-leading small groups of at most six people, giving them technical direction and support.

* Expertise in putting together well-designed solutions to support constantly growing financial platforms by implementing cutting-edge 
technology and patterns.

* In-depth knowledge of and experience working in the fintech industry.

* Familiarity with STP (Sistema de Transferencias y Pagos) and its operations.

* Understanding of and experience with SPEI (Sistema de Pagos Electrónicos Interbancarios) and its implementation.

* Expertise in disbursement processes, payment gateways, and financial transaction handling.

* Knowledge of various financial products, particularly personal microloans and their lifecycle.

* Understanding of regulatory compliance requirements in the Mexican fintech sector.

* Knowledge of risk assessment and management in financial services.

* Experience applying machine learning techniques to financial data for fraud detection, credit scoring, etc.

* Proficiency with Docker and Kubernetes for managing containerized applications.

* Skills in data analysis, using tools like SQL, Python (Pandas), or R to extract insights from financial data.

What is our way of working?

We aim to be as product-centric as possible, which means we always prioritise:

* Listening to our customers (whether internal or external), primarily qualitatively and secondarily quantitatively

* Focusing on real problems our clients face

* A strong focus on customer experience

* Ensuring that every product adds value to both our business and our customers

* Falling in love with the problem instead of the solution

* Quick validation and learning

* Strong collaboration within your team and with other teams

* Small, progressive, incremental delivery: innovation comes from iteration, not from scratch.

What we offer

* Being part of a multinational, highly driven team of professionals

* A flexible and remote working environment

* A high level of ownership and independence

* 20 vacation days per year + 75% holiday bonus

* 1 month (proportional) Christmas bonus

* Grocery vouchers (vales de despensa) of 3,257 MXN per month

* Health & Life 
insurance

* Home office set-up budget

* Unlimited budget for Kindle books

* 2 psychological sessions per month with Terapify

* Baubap Free Loan

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Design, GraphQL, Python, Docker, Senior, Engineer, and Backend jobs:
$80,000 — $120,000/year
#Location
Mexico City, Mexico
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future
Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions
**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)

# What is Splitgraph?
## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. 
Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://www.twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. 
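As a toy illustration of the data-image idea described under "Open Source Toolkit" above: sgr versions datasets the way Docker and Git version code, where the same content always maps to the same immutable image and any change produces a new one. The Python sketch below shows the general content-addressing technique behind that guarantee. It is not Splitgraph's actual implementation; the `snapshot` helper and the 12-character IDs are invented for the example.

```python
import hashlib
import json

def snapshot(rows):
    """Return a short content hash (an "image ID") for a table's rows.

    Rows are canonicalized (stable ordering, sorted keys) so that the
    same data always yields the same ID, regardless of insertion order.
    """
    canonical = json.dumps(sorted(rows, key=json.dumps), sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]

v1 = [{"city": "London", "pop": 9_000_000}]
v2 = v1 + [{"city": "Austin", "pop": 965_000}]

assert snapshot(v1) == snapshot(list(reversed(v1)))  # same data -> same image ID
assert snapshot(v1) != snapshot(v2)                  # changed data -> new image ID
```

The real tool stores versioned tables inside PostgreSQL rather than hashing JSON, but the invariant the sketch demonstrates is the same: an image ID deterministically identifies one version of a dataset.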
Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. 
We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need to manually maintain an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([LuaJIT](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. 
We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACLs, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot-reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. 
That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. 
We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph
**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits
- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?
[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
Worldwide
This job post is closed and the position is probably filled. Please do not apply.
## A little about us

Oyster's mission: to unlock global talent by making cross-border hiring easy. We want to spread great employment rights and benefits and help make them the norm for remote workers across the world.

- Remote working, flexible hours
- Permanent, full time
- Competitive salary
- Early-stage equity

We are a new, 100% distributed startup building out our product offering in 2020. We're fully funded and we're putting together a great team of industry veterans. We're a global company with team members in the UK, India, Germany, the USA, Finland, Latvia, and Mexico.

Hiring people internationally is complicated, with interacting engineering, legal, financial, operations, and HR processes. We'd like to find people who think this is as interesting a challenge as we do.

## The role

We value making this an inclusive and diverse workplace, and we welcome applicants from marginalized groups the world over.

We're looking for a senior backend engineer; that probably means at least five years' experience working professionally in teams on web development projects.

If you join us now, you'll be in on the ground floor building our technology stack and product, helping to embed a positive team culture and build a great working environment.

You'll be working on architecting and implementing product features as we rapidly expand our product offering.

Our technology is (currently) Ruby, Rails, a GraphQL API, and ES6 with React, running on Heroku. This role is primarily on the backend side, though you'll be able to work elsewhere if you have skills in other areas (and we are also looking for a full-stack developer if that's more your style).

Your job will be to collaborate with product to develop and clarify requirements, and then take charge of implementing significant portions of the product as we expand our offering. 
You'll be pairing with other engineers regularly to jointly develop implementation plans and share knowledge.

Skill sets that are important:

- Confident with Ruby, and you've worked with JavaScript to an extent. You know HTML.
- You've worked with relational databases like MySQL or PostgreSQL.
- You can articulately discuss requirements and technical plans.
- You have a good understanding of web standards and development best practices.
- You're able to write robust, comprehensible code.
- You've worked in agile teams of one kind or another.

Skills that are not required, but any of which would be a bonus:

- Professional experience with GraphQL APIs
- Prior experience with Rails
- Cloud infrastructure experience
- Web security expertise

And finally:

- You need to have a good home internet connection, or be able to get one.
- You are a fluent English speaker.

## Benefits

$60k-$80k depending on experience.

Fully flexible hours.

30 days of holiday plus public holidays, or the legal minimum in your region if greater.

#Salary and compensation
$60,000 — $80,000/year
#Location
GMT +/- 4