LOCATION: Remote within the U.S.

Civiqs is the leading online scientific polling platform and a division of Kos Media LLC. Civiqs has been conducting surveys online since 2014. Every day, Civiqs surveys thousands of people across the United States on politics, culture, and current affairs. With years of daily responses on a huge array of questions, Civiqs maintains one of the largest databases of public opinion in the United States. The scale and quality of Civiqs' public opinion data, and its online survey panel, is unmatched in the survey industry.

We are hiring an experienced and results-driven Senior Software Development Engineer to join our talented remote engineering team. As Senior Software Development Engineer, you will ensure that the research platform is operating smoothly and accurately, and that new features are delivered on time and to specification. You will work alongside researchers and data scientists to help shape the Civiqs survey application as we expand our development team and build new research products and features.

We have an energized team of survey researchers with diverse backgrounds and skill sets. If you're interested in a position that offers more than just a technical challenge, we'd like to hear from you. The ideal candidate for this position will love data as much as we do! You are an innovator, a collaborator, and have an innate dedication to excellence. You are self-motivated, efficient, and capable of delivering results with limited guidance. Come join our team and guide the future of public opinion research using the Civiqs panel!

Responsibilities

- Design, architect, code, and maintain application solutions for Civiqs using industry best practices
- Maintain and optimize the Python/PyMC modelling application and data science pipeline
- Develop a deep understanding of our frontend and backend systems, infrastructure, cloud services, and DevOps automation tools
- Clearly and precisely communicate technical issues and establish day-to-day priorities with other developers and non-technical stakeholders
- Mentor other developers on the team through pairing and direct feedback
- Kick off and lead technical team decision-making in collaboration with research and engineering management
- Quickly identify and address bugs, and anticipate runtime issues from code changes that may affect extremely large data sets
- Write detailed automated test cases for new features
- Work collaboratively with the engineering team and lead complex releases, often involving multiple systems and large data migrations
- Partner with research team members to ensure that documented requirements meet the team's needs and that development priorities are aligned
- Remain current on test, development, and deployment best practices

Experience

- 8+ years of experience in professional software development using Python and/or Ruby and JavaScript
- Experience with Python data science tools (PyMC, Pandas, Spark, etc.) preferred
- Production experience with ReactJS is a bonus
- 4+ years of experience working in an Agile, Kanban, or similar collaborative environment
- Experience working with fully remote teams preferred

Qualifications

- Experience maintaining and developing new features in large and complex codebases
- Working knowledge of systems and operations at the OS and basic networking levels
- Ability to write, run, and optimize raw SQL queries in MySQL or PostgreSQL; knowledge of other data stores a plus
- Experience with containerized application development using Docker preferred
- Experience measuring system performance and implementing security best practices
- Strong track record of developing software using automated testing tools
- Awareness of typical programming errors and the unexpected things users do, whether accidentally or maliciously
- Ability to analyze and debug distributed data processing systems
- Motivated, organized, and self-directed technical leader
- Critical thinker with a thirst for knowledge and continuous improvement
- Ability to work autonomously, take ownership, and deliver a quality software experience
- Excellent communication skills; comfortable talking with team members at all levels
- Willingness to become a Civiqs platform expert

SALARY RANGE: $130,000 - $165,000

This position is a 40 hour/week, full-time exempt position and reports to Civiqs' Engineering Manager. Candidates must be legally eligible to work in the United States. The position offers a flexible work environment, the ability to work remotely or from home, competitive salary, and excellent benefits, including: full medical, dental, and vision benefits, an optional 401K with a company match, a remote worker stipend, a generous vacation package, traumatic grief leave, a professional development stipend, and employer-paid maternity/family leave. Our organizational commitment to personal growth and work-life balance reduces churn and encourages a very rewarding long-term position.

At Civiqs, we believe that the diversity of ideas, experiences, and cultures that our employees contribute to our organization helps us be more effective in our work, and we are proud to be an inclusive and equal-opportunity workplace. The atmosphere in our office is energized by the day's news events and people united by common cause. We're a company that loves learning and supports growth and training for all our employees.

Women, people of color, and LGBTQ+ individuals are strongly encouraged to apply.

#Salary and compensation
$130,000 — $160,000/year

#Location
United States
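For context on the PyMC modelling work mentioned in the responsibilities above, here is a minimal, hypothetical sketch of the kind of Bayesian model used to estimate support for a single survey question. The PyMC v4+ API is assumed, and the model structure and numbers are illustrative only, not Civiqs' actual pipeline.

```python
# Minimal, hypothetical sketch: estimate daily support for one survey question
# from made-up response counts. Illustrative only -- not Civiqs' actual model.
import numpy as np
import pymc as pm  # PyMC v4+ API assumed

yes_counts = np.array([412, 398, 430])   # hypothetical daily "yes" responses
totals = np.array([1000, 980, 1010])     # hypothetical daily totals

with pm.Model():
    # One support share per day, with a weakly informative Beta prior
    support = pm.Beta("support", alpha=2.0, beta=2.0, shape=len(totals))
    pm.Binomial("obs", n=totals, p=support, observed=yes_counts)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)

# Posterior mean support per day
print(idata.posterior["support"].mean(dim=("chain", "draw")).values)
```

A production system would pool information across days and adjust for panel composition; this is only meant to show the flavor of the tooling.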
Let's start with why we exist.

Fleet builds open source software to manage and secure computing infrastructure: employee laptops, cloud servers, and more. Our technology helps IT and security teams build trust within their organization, while getting their jobs done more effectively.

Fleet is an all-remote company with experienced founders, including two creators of popular open source projects, and a compelling lead investor. Our business model is inspired by the success of GitLab and Elastic, and we have incredible early customers ranging from startups to Fortune 500 companies with hundreds of thousands of endpoints.

What happens when you join us?

* As the first senior engineering hire, this position offers huge potential for growth.
* You will write significant open source code, merging commits in your first days at the company.
* You will work closely with the CTO and CEO to define technical and product vision.
* Over time, you will establish yourself as a leader in Fleet's growing team and user community, whether through management or expert-level individual contributions.

Why should you join us?

* Work from anywhere with good internet. (We're 100% remote. No office. No commute.)
* Help make endpoint monitoring less intrusive and more transparent.
* Safeguard the production servers and employee laptops of Earth's largest companies.
* Build greenfield features and make key technical decisions that go live in days.
* Most (if not all) of the code you write is public and highly visible at github.com/fleetdm/fleet.

Are you our new teammate?

* You are competent with source control in Git. You have great written communication skills.
* You can mentor other developers and do code reviews. Maybe you managed open source projects before; maybe you collaborated closely with more junior engineers at work.
* You look forward to working with designers to improve the user experience of the things you work on.
* You bring senior talent to our team and open source community, with 4+ years of equivalent experience in one or more of the Engineering Foundations below (and interest in digging into the others).
* Nice to have: Experience working on an all-remote, distributed team.
* Nice to have: Experience working in IT operations and/or cybersecurity.
* Nice to have: Experience working with Mobile Device Management (MDM) APIs.
* Nice to have: Experience deploying/monitoring/managing containers with Docker/K8s.

Engineering Foundations

An ideal senior candidate has 4+ years of equivalent experience in one or more of the following three engineering foundations (and interest in digging into the others):

I. Frontend

Fleet's frontend is a single-page application (SPA) written in JavaScript with React and Redux. We strive for "convention over configuration", offering a user experience that helps security and IT staff enjoy their jobs. There are many interesting challenges in helping our users understand the data collected from their laptops and servers.

* Experience building and architecting SPAs in JavaScript/TypeScript (2+ years of equivalent experience with React specifically).
* Expert CSS skills (we use Sass).
* Ability to recommend and implement frontend testing patterns (E2E tests, etc.)
* Nice to have: Experience developing responsive applications.
* Nice to have: Familiarity with frontend performance profiling and optimization.
* Nice to have: Experience building data visualizations (graphs, charts, etc.)
* Nice to have: Experience with React Native, Electron.js, or similar.

II. Backend

Fleet's server is written in Go with go-kit. Deployments range from single servers to over 100,000 clients connected to horizontally scaled Fleet servers, handling tens of thousands of requests per minute. We aim to keep Fleet's deployment as simple as possible to ease self-hosted deployment. MySQL and Redis are used for persistence and caching.

* Experience building scalable, production-quality servers.
* Ability to recommend and implement backend testing patterns (E2E tests, etc.)
* Familiarity with server and SQL performance profiling and optimization.
* Familiarity with database migration strategies.
* Nice to have: Experience programming with Go and go-kit.
* Nice to have: Experience with Redis and/or MySQL.
* Nice to have: Experience deploying and operating hosted SaaS services.
* Nice to have: Experience working with Mobile Device Management (MDM) APIs.
* Nice to have: Experience deploying/monitoring/managing containers with Docker/K8s.

III. Endpoints

Fleet builds on top of the osquery agent (osquery.io), a Linux Foundation OSS project. Our CTO co-created osquery and serves on the Technical Steering Committee. On the endpoint we are building Orbit, a wrapper for osquery that will also become our platform for deploying additional open-source software such as Fleet Desktop (an interface for device users to interact with Fleet).

* Experience developing applications on macOS, Linux, and/or Windows.
* Familiarity with packaging tools: macOS .pkg, Linux .deb, Linux .rpm, Windows .msi, etc.
* Familiarity with service persistence: macOS launchd, Linux systemd, Windows Services, etc.
* Experience building cross-platform user interfaces.
* Experience managing and debugging the performance of software installed on endpoints.
* Readiness to write code that will directly impact performance for hundreds of thousands of people.
* Nice to have: Go (for Orbit) and C++ (for osquery) programming experience.
* Nice to have: Experience building and securing update systems for endpoint software.
* Nice to have: Experience with low-level system APIs in macOS, Linux, and/or Windows.

Please include a few sentences about your experience with the Engineering Foundations in the "Experience" box of the application.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Engineer, Developer, Digital Nomad, React, JavaScript, Cloud, CSS, Mobile, Senior, Junior, SaaS, Linux, and Backend:
$70,000 — $120,000/year

#Benefits
401(k)
Distributed team
Async
Vision insurance
Dental insurance
Medical insurance
Unlimited vacation
Paid time off
4 day workweek
401k matching
Company retreats
Coworking budget
Learning budget
Free gym membership
Mental wellness budget
Home office budget
Pay in crypto
Pseudonymous
Profit sharing
Equity compensation
No whiteboard interview
No monitoring system
No politics at work
We hire old (and young)
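Since the Endpoints foundation above centres on osquery's SQL interface to the operating system, here is a small sketch of that query model using the osquery Python bindings (the `osquery` pip package). It runs locally against a machine with osquery installed, whereas Fleet itself distributes queries to remote agents, so treat the API details as assumptions to verify against the osquery-python docs.

```python
# Sketch of osquery's SQL-over-the-OS query model using the osquery-python
# bindings (pip install osquery; requires osquery installed locally).
# Fleet distributes this kind of query to remote agents instead of spawning
# a local instance -- this is only an illustration of the concept.
import osquery

instance = osquery.SpawnInstance()  # spawn an osqueryd process with an ephemeral socket
instance.open()                     # raises if osquery is not available

result = instance.client.query(
    "SELECT name, pid, resident_size FROM processes "
    "ORDER BY resident_size DESC LIMIT 5"
)
if result.status.code == 0:
    for row in result.response:
        print(row["name"], row["pid"], row["resident_size"])
```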
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

[**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) (same form for both positions)

# What is Splitgraph?

## Open Source Toolkit

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## Splitgraph Cloud

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://www.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore), where we index 40k+ datasets

# How We Work: What does our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases.
We have `auth-db` for storing sensitive data, `registry-db`, which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db`, where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies**, including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data.
We are very competitive.

# Benefits

- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
Worldwide
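To make the "single SQL endpoint" (the data delivery network) described above concrete, here is a minimal sketch of querying it from Python over the ordinary PostgreSQL protocol. The endpoint, database name, credentials, and repository below are placeholders and assumptions; check the Splitgraph docs for the real connection details.

```python
# Minimal sketch: the Splitgraph DDN speaks the PostgreSQL wire protocol, so a
# standard driver like psycopg2 can query it. Host, dbname, credentials and the
# repository name below are placeholders/assumptions -- see the Splitgraph docs.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.com",  # assumed DDN endpoint
    port=5432,
    dbname="ddn",                # assumed database name
    user="<API_KEY>",
    password="<API_SECRET>",
)
with conn, conn.cursor() as cur:
    # Datasets are addressed as "namespace/repository"."table"
    cur.execute('SELECT * FROM "namespace/repository"."example_table" LIMIT 5')
    for row in cur.fetchall():
        print(row)
conn.close()
```

Because it is plain Postgres on the wire, existing SQL clients and BI tools can point at the same endpoint.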
# Role

We are looking for a passionate Backend Engineer who wants to own and help shape our microservices architecture. The main responsibility for this role is to produce high-quality code that is robust, including designing data models for scalability and performance.

One area our application is growing into is document management, so previous experience in similar problem areas, such as revision and approval workflows, auditing of projects, and access control levels (ACL), is a big plus for this position.

Previous experience managing projects and leading a team of developers is a prerequisite for this role. You will be working as part of a small and highly effective 100% remote team, taking part in major technical decisions, defining technical requirements, scoping, and helping the team create new features for our Bimsync application.

As one of the lead developers in the team, you are expected to have good structural thinking, being able to break down complex problems into smaller parts that can be delivered to production independently in a timeboxed fashion.

# What we are looking for

We believe in a growing and learning mindset, where people are ready to face a new challenge and learn a new technology when needed. Thus, having the right skills and a positive attitude to learning is more important to us than the degree. A BS/MS degree in Computer Science, Engineering or a related subject is important, as we value an understanding of the fundamentals, but it's not a prerequisite.

We define ourselves as an agile company, so being open to feedback and adapting to change is core to being a good fit for our team. Our company's working language is English, so we expect all candidates to communicate effectively, both written and spoken.

As a lead developer, you are expected to be an excellent communicator, able to consider different points of view (Product, Sales, other devs) and manage these expectations in order to settle on and deliver a plan.

You should have a very good understanding of backend technologies and the different challenges of creating backend services that are cloud native, having previously developed applications in the public cloud (preferably AWS). We expect strong familiarity with Java, as most of our backend stack is written in this language. We also look for proficient knowledge of databases, both SQL and NoSQL, to tune and shape the best data structures for each specific microservice. Another important point is familiarity with REST APIs and a good understanding of backwards compatibility for public APIs. An interest in, and the ability to, fix and contribute to the frontend React code is a bonus for this position.

This position is open for fully remote work, but requires being located in the EU.

# What we offer

You will be part of an amazing journey to transform the construction industry with a bold and caring team. Along this path, we will challenge each other, have difficult open conversations, and develop as we learn.

In addition to the above, we will also take care of you and provide you with the right challenges for growth. Some of our benefits:

* Paid time off (25 vacation days per year)
* Flexible working hours
* Full remote possible
* Share options program
* Company gatherings twice a year (in person when possible again)

# Location

**EU**

# About Catenda

Catenda is a Norwegian scale-up company with a global ambition to make the construction industry data-driven, with less waste and greater transparency along the way.

Our company values are Openness and Quality.

We believe in open standards for all our customer data to achieve interoperability between applications, from inception, through design and construction, to the maintenance of a building. Another core belief is that our customers should have full control over their data: all data that goes in can also be exported out.

We value the quality of the code and the product by working as a tight and effective distributed development team, preferably asynchronously, often sharing screens to collaborate. Catenda has offices in Oslo and Bergen, and is composed of an international team of 10+ nationalities working remotely across the EU.

Our solution, Bimsync, is a cloud-based collaboration platform consisting of a web application, a mobile application and our APIs. Many companies across the world are using our products to build better airports, hospitals, stadiums, homes and roads.

Our technology stack runs 100% on AWS, using the most appropriate technology for the problem at hand. Our backend is mostly in Java, and we are currently using different managed databases, like PostgreSQL, MySQL and DocumentDB. Our frontend is in JavaScript, using React, and React Native for our mobile app (iOS and Android).

#Salary and compensation
$80,000 — $110,000/year

#Benefits

Async

#Location
EU