This job post is closed and the position is probably filled. Please do not apply.
Tucows Wireless Services is building a new, modern SaaS platform to help wireless companies operate around the world. The rollout of 5G networks in the US and elsewhere provides opportunities for innovation in the telecom space, and as an Internet company, our culture, experience and approach position us well to take advantage of them.

If you are a seasoned software engineer and this sounds like something you might like to be a part of, we may have a job for you!

# Who You Are:
* You enjoy greenfield software development
* You excel at solving problems at scale for millions of people
* You want to work with cutting-edge technologies
* You want to work with and learn from some of the smartest minds in the industry
* 3+ years of professional experience in software development
* In-depth knowledge of at least one of Ruby, Python, Go, or Scala
* Solid understanding of event-driven architecture
* Experience with Kafka
* Experience writing distributed applications
* Experience with API application development
* Experience with data persistence at scale using Postgres and Redis
* Experience writing highly testable code
* Experience building highly observable systems using any of Prometheus, Grafana, Datadog, New Relic, etc.
* Experience with Infrastructure as Code using technologies such as Terraform, Nomad, Kubernetes, Docker, etc.

# Who You Might Be:
* Demonstrated performance-tuning skills
* Strong troubleshooting skills
* An advocate for automated testing and observability
* Experience working in an agile environment
* Contributions to an open-source project (of any kind)
* An account on GitHub.com with samples of your code
* Experience in the telecoms space

# About Tucows
Tucows (NASDAQ: TCX, TSX: TC) is possibly the biggest Internet company you've never heard of. We started as a simple shareware site in 1993 and have grown into a stable of businesses: mobile, internet and domains.

We embrace a people-first philosophy that is rooted in respect, trust, and flexibility. We believe that whatever works for our employees is what works best for us. It's also why the majority of our roles are remote-first, meaning you can work from anywhere you can connect to the internet!

Today, close to a thousand people work in over 16 countries to help us make the Internet better. If this sounds exciting to you, join the herd!
_______________________

We offer a competitive compensation and benefits package and invest in your growth. If you are ready to be part of a fast-growing technology company where you determine your future, we want to hear from you.

Want to know more about what we stand for? At Tucows we care about protecting the open Internet, narrowing the digital divide, and supporting fairness and equality.

We also know that diversity drives innovation. We are committed to inclusion across race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status or disability status. We celebrate multiple approaches and diverse points of view.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Learn more about Tucows, our culture and employee benefits on our site [here](https://tucows.com/careers/).

#Salary and compensation
$100,000 — $150,000/year
#Location
Worldwide
This job post is closed and the position is probably filled. Please do not apply.
Argyle is a remote-first, fast-growing Series A tech startup that has reimagined how we can use employment data.

Renting an apartment, buying a car, refinancing a home, applying for a loan: the first question you will be asked is, "How do you earn your money?" Wouldn't you think that information so foundational to our society would be simple to manage, transfer and control? Well, it's not!

Argyle provides businesses with a single global access point to employment data. Any company can process work verifications, gain real-time transparency into earnings and view worker profile details.

We are a fun and passionate group of people, all working remotely across 19 different countries and counting. We are now looking for Senior Backend Engineers to come and join our team.

## What will you do?

 - Bring experience and a big passion for API design, scalability, performance and end-to-end ownership
 - Design, build, and maintain APIs, services, and systems across Argyle's engineering teams
 - Debug production issues across services and multiple levels of the stack
 - Work with engineers across the company to build new features at large scale
 - Manage Kubernetes clusters with a GitOps-driven approach
 - Operate databases with large datasets
 - Write concurrent systems code

## What are we looking for?

 - Enjoy and have experience building APIs
 - Think about systems and services and write high-quality code. We work mostly in Python and Go; however, languages can be learned, and we care much more about your general engineering skill than knowledge of a particular language or framework.
 - Hold yourself and others to a high bar when working with production systems
 - Take pride in seeing projects through to successful completion across a wide variety of technologies and systems
 - Thrive in a collaborative environment involving different stakeholders and subject matter experts

#Salary and compensation
$40,000 — $60,000/year
#Location
Worldwide
This job post is closed and the position is probably filled. Please do not apply.
**At Vizibl, we're on a mission to help every company work better, together. We want to help all companies make a difference in the world by revolutionising the way they work together, empowering them to reach their full potential.**

We're off to a great start too. Teams in some of the world's largest enterprise companies are already collaborating with their suppliers through Vizibl and transforming the way they work to drive innovation together.

We welcome people from all backgrounds who seek the opportunity to help build a future where every company sees the benefit of working openly and collaboratively. If you have the passion, curiosity and collaborative spirit, work with us, and let's help every company work better, together.

Vizibl is a growing SaaS platform used by the world's largest organisations to help change the way they work. Our unique blend of enterprise know-how coupled with our beautiful and usable products is one of the things our customers love about us.

Vizibl is looking for a talented Back End Engineer who is passionate about building scalable, maintainable, performant backend services that put security first without compromising on our commitment to openness. As Vizibl grows, so do our ambitions for the future of our backend services, which is why this is a great opportunity for the right person to join a talented team to drive exciting new projects that will help change the way the world's largest companies work with each other.

This person will work across our backend services to help maintain our REST API, develop solutions to new problems, be involved in the design and architecture of the platform, and work collaboratively to support the growth of the platform. The ideal candidate is a self-motivated person who cares deeply about building excellent products. They don't settle for OK and have a desire to integrate themselves deeply into the workings of the business.

As this is a fully remote position, we'll be looking for strong communication skills and the ability to motivate yourself and your team to work independently.

If you're interested in building products that challenge the status quo in the enterprise space and you enjoy an abundance of autonomy with just the right amount of alignment, then we'd love to hear from you.

**Open to Everyone**

Vizibl is proud to be an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

**Working for Vizibl you will...**
* Have a huge amount of autonomy
* Work remotely
* Work with cutting-edge technologies
* Manage and support applications in production on our Kubernetes cluster
* Contribute to the design, architecture and development processes for a system used by the world's largest enterprise organisations
* Be involved in the planning and development of solutions
* Be an ambassador for our product values
* Work with an amazing team of people spread out across Europe
* Contribute to a positive and empowering company culture
* Help to build and improve a platform used by some of the world's biggest organisations
* Get support to grow and develop in your career

**What You'll Need**
* Experience working in a professional engineering team
* 3+ years of Python experience (strong candidates with experience in another language may be considered)
* Experience building production-ready REST APIs
* Strong skills in information security architecture and security best practices
* Understanding of data modelling and querying in (Postgres) SQL
* Experience with Git
* English fluency and excellent communication skills
* Experience with TDD/BDD methodologies
* A desire to learn and improve
* Organisation and self-motivation
* A great team player: our product squads work very closely together to build solutions

**We'll be impressed if**
* You have DevOps experience with Docker, Kubernetes, Google Cloud, etc.
* You have experience working in an agile team
* You have experience working in a remote team
* You have experience with queuing systems like Celery or Kafka
* You have worked on products that have been subject to regular security audits
* You write about back-end technologies
* You have frontend JavaScript experience
* You have experience architecting complex systems
* You have experience scaling web applications
* You're familiar with the enterprise project management space
* You've integrated with large corporate IT environments before

**Benefits**
* Huge amounts of autonomy
* Flexible working
* Work from anywhere
* Competitive compensation packages
* Options in a growing SaaS business
* Work with a great team
* Great career development opportunities
* Annual retreats

#Salary and compensation
$60,000 — $80,000/year
#Location
Europe
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future
Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions
**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)

# What is Splitgraph?
## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration and governance. Users can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.
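Because the platform speaks plain PostgreSQL, that unified SQL interface can be reached with any ordinary Postgres client. Below is a minimal sketch using psycopg2; the endpoint hostname, database name, credential placeholders and the dataset/table names are illustrative assumptions rather than details confirmed by this post.

```python
# Minimal, illustrative sketch: querying a Splitgraph SQL endpoint with a
# standard Postgres client. Connection details and dataset names are assumed.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.com",   # assumed public SQL endpoint
    port=5432,
    user="<api-key>",             # hypothetical credentials
    password="<api-secret>",
    dbname="ddn",                 # assumed database name
)

with conn, conn.cursor() as cur:
    # Datasets are assumed to be addressed as "namespace/repository:tag" schemas.
    cur.execute('SELECT * FROM "some-namespace/some-dataset:latest".some_table LIMIT 5;')
    for row in cur.fetchall():
        print(row)
```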
# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://www.twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases.
We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks.
Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph (see the sketch after this list).

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation, [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.
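To make the Airflow item above a little more concrete, here is a minimal DAG sketch in the Airflow 2.x style. The DAG id, schedule and ingestion callable are hypothetical placeholders, not Splitgraph's actual pipeline code.

```python
# Minimal, illustrative Airflow DAG (Airflow 2.x style). Names and logic below
# are hypothetical placeholders, not Splitgraph's real catalog-maintenance jobs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_public_dataset():
    """Placeholder for a job that fetches a public dataset and loads it."""
    print("fetching and loading a dataset...")


with DAG(
    dag_id="ingest_public_datasets",  # hypothetical name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_public_dataset",
        python_callable=ingest_public_dataset,
    )
```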
# Life at Splitgraph
**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits
- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?
[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
Worldwide