This job post is closed and the position is probably filled. Please do not apply.
Location: Austria, Germany, Switzerland - preferred: Vienna, Dornbirn, Berlin; the rest of Europe also possible (remote)\n\nPosition: Full-time employment\n\n**ABOUT CRATE.IO**\n\nCrate.io is the developer of CrateDB, a leading-edge IoT database that extends the limits of time-series applications. The highly scalable distributed database combines the performance of NoSQL with the power and simplicity of standard SQL. Designed specifically to support machine data applications and IIoT, CrateDB is optimized for time-series and industrial data and runs in the cloud on Azure and Amazon as well as on the edge and on-premises.\n\nWe are a VC- and corporate-funded global technology company in the IoT space; both Forbes and Gartner have recognized us as cutting edge. The company is well funded, with $10M+ of fresh capital raised in our latest financing round this year. We are gearing up for hyper-growth, with offices in the USA, Germany, Austria and Switzerland.\nIn addition to CrateDB, we are developing a leading-edge IoT platform to enable smart manufacturing and Industry 4.0 initiatives with our customers globally. The solution is live today with the first lighthouse customer and is expected to launch commercially later this year.\n\nWith our Analytics Platform, we leverage the power of CrateDB for discrete manufacturing use cases, supporting frontline workers by giving them a "digital friend". We build the next generation of analytics processes and tools to improve efficiency on the factory floor and help rollout teams and integration specialists deploy them to their factories.\n\n**ABOUT THE ROLE**\n\nWe are looking for a Frontend Software Engineer to strengthen our team. This role will report directly to the Lead Engineer.
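The post itself contains no code, but to illustrate the kind of standard-SQL, time-series workload CrateDB is described as optimized for, here is a minimal sketch of querying it from TypeScript over CrateDB's HTTP endpoint. The host, table and column names (`sensor_readings`, `device_id`, `ts`, `value`) are hypothetical placeholders, not taken from the job post.

```typescript
// Hypothetical sketch: hourly averages for one device from a CrateDB
// time-series table, using CrateDB's HTTP SQL endpoint (POST /_sql).
// Table/column names and the localhost URL are assumptions for illustration.
interface SqlResponse {
  cols: string[];
  rows: unknown[][];
}

async function hourlyAverages(deviceId: string): Promise<SqlResponse> {
  const res = await fetch("http://localhost:4200/_sql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      stmt: `SELECT date_trunc('hour', ts) AS hour, avg(value) AS avg_value
             FROM sensor_readings
             WHERE device_id = ?
             GROUP BY hour
             ORDER BY hour`,
      args: [deviceId],
    }),
  });
  if (!res.ok) {
    throw new Error(`CrateDB query failed with status ${res.status}`);
  }
  return (await res.json()) as SqlResponse;
}
```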
\n\n**WHAT YOU GET OUT OF THIS OPPORTUNITY**\n* Join a leading VC-funded tech company from the pre-B stage through the exit\n* Contribute demonstrable business impact to Crate's growth\n* Be part of an open, collaborative culture with "Craties" from diverse backgrounds\n\n**WHAT YOU'RE RESPONSIBLE FOR**\n\n* Maintain, improve and extend the code base of our front-end applications that power our IoT analytics platform\n* Implement web applications using state-of-the-art front-end frameworks\n* Work closely with the UI/UX designer\n* Ensure quality through test-driven development\n* Treat performance and security as keys to building sustainable products\n* Research and derive customer implementations\n\n**YOUR SKILLS**\n\n* Proven experience as a frontend engineer or in a similar role (5+ years)\n* Experience interacting with REST and GraphQL APIs\n* Experience with CI/CD procedures and tools\n* Excellent analytical and creative problem-solving skills\n* Strong working experience with React and React Native; other common web stack technologies (JavaScript, HTML, CSS, etc.) are nice to have\n* Comprehensive understanding of service architecture\n* A strong affinity for transforming customer needs into software\n* Familiarity with testing frameworks like Jest or Cypress\n* The craftsmanship to continuously improve existing code and take ownership of it\n* Fluent English\n\n**NICE TO HAVE**\n\n* Work experience with agile methodologies, such as Scrum\n* Experience with CrateDB\n* Additional language skills such as Python\n\n**WHAT WE OFFER**\n\n* Competitive compensation\n* Flexible working hours\n* A variety of perks (e.g., financial allowances for public transportation, fitness, and education)\n* Participation in our Employee Stock Option Plan\n* The opportunity to become part of one of the most exciting startups in the IT scene (winner of the 2021 [IoT Evolution Industrial IoT Product of the Year Award](https://crate.io/press/crate-io-receives-2021-iot-evolution-industrial-iot-product-of-the-year-award/))\n\n\nAt Crate.io, we don't just accept difference - we celebrate it and support it. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. \n\nPlease mention the words **BEACH THAT STOOL** when applying to show you read the job post completely (#RMTguOTcuOS4xNzU=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Salary and compensation\n
$80,000 — $110,000/year\n
\n\n#Location\nEurope
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future\nJoin us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.\n\nSplitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.\n\nSplitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.\n\n# Open Positions\n\n**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.\n\n[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)\n\n[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)\n\n[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)\n\n# What is Splitgraph?\n## **Open Source Toolkit**\n\n[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.\n\n## **Splitgraph Cloud**\n\nSplitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it.
We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.\n\n# Learn More About Us\n\n- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)\n\n- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)\n\n- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))\n\n- [Read our blog](https://www.splitgraph.com/blog)\n\n- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)\n\n- [Follow us on Twitter](https://www.twitter.com/splitgraph)\n\n- [Find us on GitHub](https://www.github.com/splitgraph)\n\n- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)\n\n- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets\n\n# How We Work: What does our stack look like?\n\nWe prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:\n\n- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.\n\n- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database. (A sketch of this pattern follows the stack list below.)\n\n- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we have three databases.
We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.\n\n- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need to manually maintain an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.\n\n- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).\n\n- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.\n\n- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).\n\n- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.\n\n- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks.
Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.\n\n- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.\n\n- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.\n\n- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.\n\n- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.\n\n- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.\n\n- **The occasional best-of-breed SaaS services for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.\n\n- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.
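The TypeScript item above mentions fully-typed GraphQL queries generated by graphql-codegen from the Postgraphile schema. As a rough sketch of that pattern only: in the setup described, the result type and a ready-made hook would be generated rather than hand-written, and the `repositories` field and its columns below are hypothetical, not Splitgraph's actual schema.

```tsx
// Hypothetical sketch of typed data fetching with apollo-client.
// graphql-codegen would normally generate RepositoriesResult and a hook
// from the Postgraphile schema; here the type is hand-written and all
// field names are illustrative.
import * as React from "react";
import { gql, useQuery } from "@apollo/client";

const REPOSITORIES = gql`
  query Repositories($namespace: String!) {
    repositories(condition: { namespace: $namespace }) {
      nodes {
        namespace
        repository
        updatedAt
      }
    }
  }
`;

interface RepositoriesResult {
  repositories: {
    nodes: { namespace: string; repository: string; updatedAt: string }[];
  };
}

export function RepositoryList({ namespace }: { namespace: string }) {
  const { data, loading, error } = useQuery<RepositoriesResult>(REPOSITORIES, {
    variables: { namespace },
  });

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data?.repositories.nodes.map((r) => (
        <li key={`${r.namespace}/${r.repository}`}>
          {r.namespace}/{r.repository} (updated {r.updatedAt})
        </li>
      ))}
    </ul>
  );
}
```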
# Life at Splitgraph\n**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.\n\n**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).\n\n**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.\n\n**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.\n\n# Benefits\n- Fully remote\n\n- Flexible working hours\n\n- Generous compensation and equity package\n\n- Opportunity to make high-impact contributions to an agile team\n\n# How to Apply? Questions?\n[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)\n\nIf you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected]) \n\nPlease mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely (#RMTguOTcuOS4xNzU=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Location\nWorldwide
## Full-time. Fully remote within CET ±2. Still hiring post-Covid!\n\nJust is a FinTech company building SaaS products for corporate treasury. We help CFOs and finance teams in large multinational companies forecast and manage their financial risk.\n\nWe launched our first foreign exchange analytics solution in August 2019 and already serve 20+ major corporate customers and 2 non-profits.\n\nWe're currently developing a "liquidity forecasting" tool which lets companies forecast how much money they'll have in the bank in the future, and stress test this forecast against various global events.\n\nWe have a great product development team and are looking for an experienced front-end developer to join us so that we can build top-notch user experiences for our customers more quickly.\n\n### What we offer\n\n- Join a FinTech startup at the sweet spot: early enough that you can still help shape the company, but established enough to offer good job stability and growth prospects.\n- 100% remote working, unless you'd like to live in Oslo (it's nice!), and we'll buy you some decent home office equipment.\n- Regular opportunities to get together with the whole company somewhere fun.\n- €65-75k salary, wherever you are; we won't low-ball you for being in a country with a lower cost of living.\n- Stock options, because we want it to be your company as well as ours.\n\n### What you'll be doing\n\n- You'll spend most of your time in the first months developing our liquidity management product; we have customers pre-committed to this, so we're eager to launch as soon as we can.\n- You'll primarily be responsible for the web client and GraphQL server, but will likely get involved with other things too.\n- You'll work with our other engineers to come up with the right overall architecture for our solution, and design gRPC APIs that make sense for the front-end.\n- We'll want you to develop UI test coverage. We have good automated test coverage of our backend services, and front-end unit tests, but we'd like to start running UI tests with Puppeteer or similar (a minimal sketch of such a test follows the requirements list below).\n- You'll also lead the design and implementation of a real-time collaboration feature, using something like ShareDB.\n- We'll spend time helping you to understand our business and archetypal customer in detail. Our engineers don't just follow instructions; they have their own vision of the product and are always looking for ways to do things better.\n\n### The requirements\n\n- You should have extensive experience developing complex web applications with React, Redux and TypeScript; we'd love to see some cool things you made!\n- You should also have worked with GraphQL.\n- You should be good with CSS and familiar with preprocessors.\n- You'll need an eye for detail, and the ability to build things that don't just work but look and feel great too.\n- You need to practice modern software development techniques such as unit testing, continuous integration and distributed version control.\n- You need to be within ±2 hours of the CET timezone, because we think remote collaboration is really important.\n- We want you to be a fun person to work with! We believe that working together as a team is the most important thing for success.
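As referenced above, the post mentions wanting to introduce UI tests with Puppeteer or similar. Purely as a minimal sketch of what such a smoke test could look like; the URL, route and `data-testid` selector are placeholders, not the actual application:

```typescript
// Hypothetical Puppeteer smoke test: fail if the main forecast view
// never renders. Route and selector are illustrative placeholders.
import puppeteer from "puppeteer";

async function smokeTest(): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto("http://localhost:3000/forecast", { waitUntil: "networkidle0" });
    // The test passes only if the forecast table appears within 10 seconds.
    await page.waitForSelector("[data-testid='forecast-table']", { timeout: 10_000 });
  } finally {
    await browser.close();
  }
}

smokeTest().catch((err) => {
  console.error(err);
  process.exit(1);
});
```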
### Also good if\n\n- You have some backend development experience, especially with Go. We support working across the full stack for people who are interested.\n- You've worked with Web Components, using Stencil.js or similar.\n- You have publicly available projects and code that we can take a look at.\n\n### Technologies we use\n\n- *Frontend:* React, Redux, TypeScript, Stylus, GraphQL\n- *Backend:* Go, Java 11, gRPC, RabbitMQ, Open Policy Agent, PostgreSQL\n- *Platform:* Google Cloud Platform, Docker, Kubernetes\n- *Tooling:* Your choice of new laptop, GitLab, Bazel\n\n### Applying\n\nFeel free to send us your CV at [[email protected]](mailto:[email protected]), along with a link to something cool you've built previously that we can check out.\n\n*(Direct applicants only. We're not open to outsourcing firms or recruiters, sorry.)* \n\nPlease mention the words **TEXT MONSTER CLAW** when applying to show you read the job post completely (#RMTguOTcuOS4xNzU=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.\n\n \n\n#Salary and compensation\n
No salary data published by the company, so we estimated the salary based on similar jobs related to JavaScript, React, Senior, Engineer, Full Stack, GraphQL, Front End, Developer, Digital Nomad, Finance, Java, Cloud, CSS, SaaS and Backend:\n\n
$70,000 — $120,000/year\n
\n\n#Location\nCET ±2 timezone