Prominent Edge is seeking highly talented and passionate Software Engineers to join our phenomenal software development team. We are a 100% remote company, so successful candidates must be highly self-motivated and capable of working independently. We do, however, share knowledge and talk across projects and topics on a daily basis. Since we are primarily a software engineering services company, you'll have exposure to a variety of technologies, as opposed to working on one product or stack for too long.

We've been a 100% remote company since before it was cool to be remote. We hire the best talent and always strive to exceed expectations. We leverage best-of-breed open source technologies to provide our customers with innovative, user-centric solutions. We invest in our company culture and make sure that we have fun. We also have exceptional benefits, such as free quality healthcare for your entire family. If this sounds like the type of environment in which you would thrive, and you qualify for the position below, please apply -- we'd love to hear from you! Visit our careers page (https://prominentedge.com/careers) to learn more.

**Required Skills**
* 5+ years of experience as a Full-Stack Software Engineer, with experience working in an Agile development environment
* Experience leading project teams through the full development life cycle, including requirements analysis, architecture, design, coding, testing, and delivery of solutions
* Front-end development skills using modern JavaScript frameworks, such as ReactJS/React Native, Angular/AngularJS, or Vue
* Back-end development skills using server-side frameworks, such as NodeJS/Express, Flask, Django, or Spring
* Database skills (e.g., Elasticsearch, Postgres/PostGIS, SQLite, MySQL, SQL Server, MongoDB, Redis)
* Excellent interpersonal and communication skills (both written and oral)
* Highly self-motivated and results-oriented team player
* Unwavering integrity and commitment to excellence
* BS degree in Computer Science or a related field, or equivalent work experience

**Additional Skills ("Nice to Have")**
* Open source geospatial technologies, such as Mapbox GL, GeoServer, etc.
* Data visualization using technologies such as D3, Kibana, etc.
* Containerization and container orchestration, preferably using Docker and Kubernetes
* Cloud computing, especially using AWS services such as S3, RDS, SQS, EMR, or Kinesis
* Serverless approaches, preferably using AWS Lambda and the Serverless Framework
* DevOps and Continuous Integration / Continuous Delivery (CI/CD), using technologies such as Jenkins or AWS CodeBuild
* 3D game engine or 3D web experience, using technologies such as CesiumJS, WebGL, Unity, or Unreal
* Advanced technologies (machine learning, computer vision, image processing, data mining, data analytics), using tools such as TensorFlow, PyTorch, or Apache Spark
* Scrum Master
* Active Security Clearance (or the ability to obtain one)
* Advanced degree (MS or MBA)

**Benefits**
* Six weeks paid time off per year
* Six percent 401(k) matching, vested immediately
* Company-paid, low-deductible healthcare for the entire family
* Straight-time overtime pay for all employees
* Flex time (i.e., adjust your hours to fit your schedule)
* Paid training, courses, and conferences
* Laptop upgrades
* Work from the comfort of your own home!
* This organization participates in E-Verify

Please mention the words **BORING JELLY ICON** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read the post and confirm they're human.

# Salary and compensation
$100,000 — $180,000/year
# Location
Worldwide
# How do you apply?

Fill out the application at the link and apply online, or give me a call at (703) 801-0976!
This job post is closed and the position is probably filled. Please do not apply.
Flightradar24 is looking for experienced, well-rounded C++ and Python developers. You will play an important role in building and improving our back-end systems, which process very high volumes of aviation data, such as flight positions, each day.

Your work will improve the experience of millions of daily Flightradar24 users, as well as enable our business and enterprise customers to effectively integrate our aviation data and services with their businesses.

**What you'll do**

* Develop our back-end systems using modern C++ and Python on a Linux platform, using open source tools like RabbitMQ, Redis, and MySQL, as well as cloud services (a minimal sketch of this kind of pipeline appears at the end of this post)
* Improve our flight tracking coverage with semi-embedded C++ development for our global network of Raspberry Pi-based ADS-B receivers (approximately 20,000 devices)
* Design and implement big data streaming, ingestion, and event processing using both cloud and on-premise SQL and NoSQL systems
* Expand our flight event detection logic and find new ways to use our data
* Improve the robustness of our systems using cloud infrastructure like AWS and Azure, and tools like Terraform, Ansible, Docker, and Kubernetes
* Apply analytic and algorithmic skills to solve software design and aviation tracking challenges

**Who you are**

* An experienced software engineer with at least 4 years of professional development, ideally in online/web services environments and with similar tech stacks
* Experienced with modern C++ (C++11 through C++20), the STL and Boost, and Python 3, with an understanding of data structures, algorithms, and their use cases and efficiency
* Passionate about development best practices and quality efforts, such as test-driven development, unit testing, code reviews, continuous integration, etc.
* You know how to design simple, performant, testable, and maintainable software
* You love what you do and are passionate about code and technology
* You have a university degree in computer science or similar
* You have strong written and spoken English; Swedish is not needed for this role
* If you have experience with aviation data standards, including ADS-B, that's a big plus
* *Note that this is a fully remote position, but we would like you to be located within a 3-hour time difference from Sweden's timezone (CET/CEST) to align your working hours with the rest of the team.*

**About Flightradar24**

With over 2 million daily users, Flightradar24 is the world's most popular flight tracking service, and our apps regularly top the App Store and Google Play charts. We also offer a wide range of commercial services, and our customers include many of the largest names in aviation.

We're constantly adding new services and improving existing products. To help us meet those challenges, we're looking for creative, collaborative, and tech-savvy applicants to join us.

Please mention the words **RIDGE FISH FORK** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read the post and confirm they're human.

# Location
CET ± 3 hours
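
To give a flavor of the kind of pipeline described under "What you'll do", here is a minimal, hypothetical sketch of a consumer that reads flight position messages from RabbitMQ and caches the latest position per aircraft in Redis. It only illustrates the tools named in the post; the queue name, message format, hosts, and credentials are assumptions for illustration, not details of Flightradar24's actual systems.

```python
# Hypothetical sketch: consume flight position messages from RabbitMQ
# and cache the latest position per aircraft in Redis.
# Queue name, message schema, and hosts are assumptions for illustration only.
import json

import pika   # RabbitMQ client
import redis  # Redis client

rabbit = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = rabbit.channel()
channel.queue_declare(queue="positions", durable=True)

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def handle_position(ch, method, properties, body):
    """Store the most recent position for each aircraft, keyed by ICAO address."""
    msg = json.loads(body)  # e.g. {"icao": "4CA87C", "lat": 59.65, "lon": 17.92, "alt": 37000}
    cache.hset(
        f"position:{msg['icao']}",
        mapping={"lat": msg["lat"], "lon": msg["lon"], "alt": msg["alt"]},
    )
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue="positions", on_message_callback=handle_position)
channel.start_consuming()
```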
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful, and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning, and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration, and governance. Users can upload data, connect live databases, or "push" versioned snapshots to it.
We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build, push, and pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://www.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore), where we index 40k+ datasets

# How We Work: What does our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy), and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/), based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases.
We have `auth-db` for storing sensitive data, `registry-db`, which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db`, where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need to manually maintain an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc. (A minimal query sketch appears after this list.)

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks.
Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway, or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies**, including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.
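
Because the stack is Postgres all the way down, the "data delivery network" described above can be queried with any ordinary Postgres client. As a flavor of what that looks like, here is a minimal Python sketch using `psycopg2`; the endpoint, credentials, and repository/table names are placeholders and assumptions for illustration, not documentation of the production service.

```python
# Hypothetical sketch: querying the Splitgraph "data delivery network" (DDN)
# with a plain Postgres client. Host, credentials, and table names below are
# placeholders for illustration only.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.com",  # assumed DDN endpoint
    port=5432,
    user="<api-key>",            # placeholder credentials
    password="<api-secret>",
    dbname="ddn",
)

with conn.cursor() as cur:
    # Behind the scenes, a PgBouncer layer routes the query to a temporary
    # Postgres instance where the relevant data image is loaded and cached.
    cur.execute('SELECT * FROM "some-namespace/some-repo".some_table LIMIT 5')
    for row in cur.fetchall():
        print(row)

conn.close()
```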
# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth, and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search for these words to find applicants who read the post and confirm they're human.

# Location
Worldwide
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
Exten Technologies is a growing startup in north Austin focused on providing an open-standards-based, high-performance storage solution to cloud service providers, OEMs, and ODMs.

We are looking for a Principal Engineer who is an expert in C and C++ and passionate about learning storage-related technologies that are leading a cloud data center revolution.

ABOUT THE JOB

* Use the best of object-oriented and functional language techniques when building APIs/logic
* Design robust solutions to hard problems that consider scale, security, reliability, and cost
* Ensure code and design quality through the execution of test plans
* Develop coding standards, methodology, and repeatable processes
* Apply strong attention to detail and an understanding of the latest techniques and patterns to provide a leadership perspective on front-end and back-end technologies and their overall impact
* Provide technical leadership at a project level
* Mentor and teach associate or junior developers

WHAT YOU NEED

* 5+ years of relevant experience
* Expertise in C and C++
* Experience with Linux
* System software programming
* Experience with firmware and programming drivers
* Server hardware platform experience
* BS/MS in Computer Science or Electrical Engineering
* JavaScript and Python scripting
* Mastery of multi-threaded design and performance issues for high-performance applications
* Ability to research and implement complex algorithms, creating concrete implementations from theoretical designs
* Ability to understand existing industry implementations in open source and evaluate the benefits of various approaches
* Full understanding of computer system performance, including hardware and processor features that may be leveraged for optimized implementations
* Experience with profiling and tuning system-level performance issues
* Experience using agile/scrum processes to develop software systems
* Experience designing application architectures, creating project estimates, defining scope requirements, and structuring projects
* Ability to work quickly while maintaining strong attention to detail and accuracy
* Strong communication and organizational skills, with the ability to thrive in a fast-paced, deadline-driven environment, both internally and client-facing
* Mastery of data structure design trade-offs, and the ability to apply complex data structures to specific problems

# Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to C, C++, Senior, Engineer, Developer, Digital Nomad, Cloud, Python, Junior, and Backend roles:
$75,000 — $120,000/year
# Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.