About You

We are seeking a skilled Analytics Engineer to join our dynamic Data Team. The ideal candidate will have a comprehensive understanding of the data lifecycle from ingestion to consumption, with a particular focus on data modeling. This role will support various business domains, predominantly Finance, by organizing and structuring data to support robust analytics and reporting.

This role will be part of a highly collaborative team made up of US- and Brazil-based Teachable and Hotmart employees.

What You'll Do

* Data Ingestion to Consumption: Manage the flow of data from ingestion to final consumption. Organize data, understand modern data structures and file types, and ensure proper storage in data lakes and data warehouses.
* Data Modeling: Develop and maintain entity-relationship models. Relate business and calculation rules to data models to ensure data integrity and relevance.
* Pipeline Implementation: Design and implement data pipelines, preferably in SQL or Python, to ensure efficient data processing and transformation (see the sketch after the "What You'll Bring" list).
* Reporting Support: Collaborate with business analysts and other stakeholders to understand reporting needs and ensure that data structures support these requirements.
* Documentation: Maintain thorough documentation of data models, data flows, and data transformation processes.
* Collaboration: Work closely with other members of the Data Team and cross-functional teams to support various data-related projects.
* Quality Assurance: Implement and monitor data quality checks to ensure the accuracy and reliability of data.
* Cloud Technologies: While the focus is on data modeling, familiarity with cloud technologies and platforms (e.g., AWS) is a plus.

What You'll Bring

* 3+ years of experience in data engineering, analytics engineering, and/or similar functions.
* Experience collaborating with business stakeholders to build and support data projects.
* Experience with database languages, indexing, and partitioning to handle large volumes of data and create optimized queries and databases.
* Experience with file manipulation and organization, such as Parquet.
* Experience with the "ETL/ELT as code" approach for building Data Marts and Data Warehouses.
* Experience with cloud infrastructure and knowledge of solutions like Athena, Redshift Spectrum, and SageMaker.
* Experience with Apache Airflow for creating DAGs and various purposes.
* Critical thinking for evaluating contexts and making decisions about delivery formats that meet the company's needs (e.g., materialized views).
* Knowledge of development languages, preferably Python or Spark.
* Knowledge of SQL.
* Knowledge of S3, Redshift, and PostgreSQL.
* Experience developing highly complex historical transformations. Utilization of events is a plus.
* Experience with ETL orchestration and updates.
* Experience with error and inconsistency alerts, including detailed root cause analysis, correction, and improvement proposals.
* Experience with documentation and process creation.
* Knowledge of data pipeline and LakeHouse technologies is a plus.
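As a rough illustration of the "Pipeline Implementation" responsibility above, here is a minimal, hedged sketch of one ETL step in Python with pandas: read raw Parquet, apply a business rule, and write a modeled table back out. The bucket paths, column names, and the rule itself are hypothetical placeholders, not details from this posting.

```python
# Minimal ETL sketch: raw Parquet -> modeled Parquet.
# Paths and columns are hypothetical; s3:// paths require s3fs/pyarrow installed.
import pandas as pd

RAW_PATH = "s3://example-raw/finance/invoices.parquet"                # placeholder
MODELED_PATH = "s3://example-marts/finance/revenue_by_month.parquet"  # placeholder

def build_revenue_mart() -> pd.DataFrame:
    df = pd.read_parquet(RAW_PATH)
    # Illustrative business rule: only booked invoices count toward revenue.
    booked = df[df["status"] == "booked"].copy()
    # Assumes invoiced_at is already a datetime column.
    booked["month"] = booked["invoiced_at"].dt.to_period("M").dt.to_timestamp()
    mart = (
        booked.groupby("month", as_index=False)["amount"]
        .sum()
        .rename(columns={"amount": "revenue"})
    )
    mart.to_parquet(MODELED_PATH, index=False)
    return mart

if __name__ == "__main__":
    print(build_revenue_mart().head())
```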
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Design, Python, Cloud, and Engineer jobs:
$62,500 — $117,500/year
#Location
São Paulo, São Paulo, Brazil
We are currently seeking a Senior Data Engineer with 5-7 years' experience. The ideal candidate will be able to work independently within an Agile working environment and have experience working with cloud infrastructure, leveraging tools such as Apache Airflow, Databricks, dbt, and Snowflake. Familiarity with real-time data processing and AI implementation is advantageous.

Responsibilities:
* Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch and real-time use cases.
* Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks.
* Leverage Databricks for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.
* Utilize PySpark and/or Ray to build and scale distributed computing tasks, enhancing the performance of machine learning model training and inference processes.
* Monitor, troubleshoot, and resolve issues within data pipelines and infrastructure, implementing best practices for data engineering and continuous improvement.
* Diagrammatically document data engineering workflows.
* Collaborate with other Data Engineers, Product Owners, Software Developers, and Machine Learning Engineers to implement new product features by understanding their needs and delivering in a timely manner.

Qualifications:
* Minimum of 3 years' experience deploying enterprise-level, scalable data engineering solutions.
* Strong examples of independently developed data pipelines built end-to-end: from problem formulation and raw data to implementation, optimization, and results.
* Proven track record of building and managing scalable cloud-based infrastructure on AWS (incl. S3, DynamoDB, EMR).
* Proven track record of implementing and managing the AI model lifecycle in a production environment.
* Experience using Apache Airflow (or equivalent), Snowflake, and Lucene-based search engines (a minimal Airflow sketch follows this list).
* Experience with Databricks (Delta format, Unity Catalog).
* Advanced SQL and Python knowledge with associated coding experience.
* Strong experience with DevOps practices for continuous integration and continuous delivery (CI/CD).
* Experience wrangling structured and unstructured file formats (Parquet, CSV, JSON).
* Understanding and implementation of best practices within ETL and ELT processes.
* Data quality best-practice implementation using Great Expectations.
* Real-time data processing experience using Apache Kafka (or equivalent) is advantageous.
* Works independently with minimal supervision.
* Takes initiative and is action-focused.
* Mentors and shares knowledge with junior team members.
* Collaborative, with a strong ability to work in cross-functional teams.
* Excellent communication skills, with the ability to communicate with stakeholders across varying interest groups.
* Fluency in spoken and written English.
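For orientation, a minimal sketch of the kind of Airflow orchestration the role describes: one DAG with an extract task feeding a transform task. The DAG id, schedule, and task bodies are illustrative assumptions, not details from this posting.

```python
# Minimal Airflow DAG sketch (Airflow 2.4+ style; task logic is placeholder).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting raw data")

def transform():
    # Placeholder: apply modeling/business rules, load to the warehouse.
    print("transforming and loading")

with DAG(
    dag_id="example_daily_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```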
Edelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics, and data consultancy with a distinctly human mission.

We use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting, and connections more meaningful.

DXI brings together and integrates the necessary people-based PR, communications, social, research, and exogenous data, as well as the technology infrastructure to create, collect, store, and manage first-party data and identity resolution. DXI is comprised of over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.

To learn more, visit: https://www.edelmandxi.com
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Python, DevOps, Cloud, Senior, Junior, and Engineer jobs:
$60,000 — $110,000/year
Memora Health works with leading healthcare organizations to make complex care journeys simple for patients and clinicians so that care is more accessible, actionable, and always-on. Our team is rapidly growing as we expand our programs to reach more health systems and patients, and we are excited to bring on a Senior Data Engineer.

In this role, you will be responsible for driving the architecture, design, and development of our data warehouse and analytics solutions, alongside APIs that allow other internal teams to interact with our data (a minimal API sketch follows the list below). The ideal candidate will be able to collaborate effectively with Memora's Product Management, Engineering, QA, TechOps, and business stakeholders.

This role will work closely with cross-functional teams to understand customer pain points and identify, prioritize, and implement maintainable solutions. Ideal candidates will be driven not only by the problem we are solving but also by the innovative approach and technology that we are applying to healthcare, looking to make a significant impact on healthcare delivery. We're looking for someone with exceptional curiosity and enthusiasm for solving hard problems.

Primary Responsibilities:

* Collaborate with the Technical Lead, fellow engineers, Product Managers, QA, and TechOps to develop, test, secure, iterate, and scale complex data infrastructure, data models, data pipelines, APIs, and application backend functionality.
* Work closely with cross-functional teams to understand customer pain points and identify, prioritize, and implement maintainable solutions.
* Promote product development best practices, supportability, and code quality, both by leading by example and by mentoring other software engineers.
* Manage and pare back technical debt, escalating to the Technical Lead and Engineering Manager as needed.
* Establish best practices for designing, building, and maintaining data models.
* Design and develop data models and transformation layers to support reporting, analytics, and AI/ML capabilities.
* Develop and maintain solutions to enable self-serve reporting and analytics.
* Build robust, performant ETL/ELT data pipelines.
* Develop data quality monitoring solutions to increase data quality standards and metrics accuracy.
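Since the role pairs warehouse work with internal-facing APIs, and FastAPI is one of the frameworks the qualifications below name, here is a minimal, hedged sketch of such an endpoint. The route, model, and data source are hypothetical, not Memora's actual API.

```python
# Minimal internal-API sketch with FastAPI (route and fields are hypothetical).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="internal-data-api")

class ReportRow(BaseModel):
    month: str
    active_patients: int

# Placeholder standing in for a warehouse query result.
_FAKE_ROWS = [ReportRow(month="2024-01", active_patients=1200)]

@app.get("/reports/active-patients", response_model=list[ReportRow])
def active_patients() -> list[ReportRow]:
    # A real service would query the warehouse (e.g., BigQuery or Snowflake) here.
    return _FAKE_ROWS
```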
Qualifications (Required):

* 3+ years' experience shipping, maintaining, and supporting enterprise-grade software products
* 3+ years of data warehousing / analytics engineering
* 3+ years of data modeling experience
* Disciplined in writing readable, testable, and supportable code in JavaScript, TypeScript, Node.js (Express), Python (Flask, Django, or FastAPI), or Java
* Expertise in writing and consuming RESTful APIs
* Experience with relational or NoSQL databases (PostgreSQL, MySQL, MongoDB, Redis, etc.)
* Experience with data warehouses (BigQuery, Snowflake, etc.)
* Experience with analytical and reporting tools, such as Looker or Tableau
* Inclination toward test-driven development and test automation
* Experience with scrum methodology
* Excels at mentoring junior engineers
* B.S. in Computer Science or another quantitative field, or related work experience

Qualifications (Bonus):

* Understanding of DevOps practices and technologies (Docker, Kubernetes, CI/CD, test coverage and automation, branch and release management)
* Experience with security tooling in the SDLC and Security by Design principles
* Experience with observability and APM tooling (Sumo Logic, Splunk, Sentry, New Relic, Datadog, etc.)
* Experience with an integration framework (Mirth Connect, Mule ESB, Apache NiFi, Boomi, etc.)
* Experience with healthcare data interoperability frameworks (FHIR, HL7, CCDA, etc.)
* Experience with healthcare data sources (EHRs, claims, etc.)
* Experience working at a startup

What You Get:

* An opportunity to work on a rapidly scaling care delivery platform, engaging thousands of patients and care team members and growing 2-3x annually
* A highly collaborative environment working on the fun challenges of scaling a high-growth startup
* Work alongside world-class clinical, operational, and technical teams to build and scale Memora
* Shape how leading health systems and plans think about modernizing the care delivery experience for their patients and care teams
* Improve the way care is delivered for hundreds of thousands of patients
* Gain deep expertise in healthcare transformation and direct customer exposure with the country's most innovative health systems and plans
* Ownership over your success and the ability to significantly impact the growth of our company
* Competitive salary and equity compensation, with benefits including health, dental, and vision coverage, flexible work hours, paid maternity/paternity leave, bi-annual retreats, a MacBook, and a 401(k) plan
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Design, Python, DevOps, NoSQL, Senior, Engineer, and Backend jobs:
$60,000 — $110,000/year
This job post is closed and the position is probably filled. Please do not apply.
Description

We are changing things. We're So Energy, a fast-growing 100% renewable energy supplier in the UK. We're the leading energy supplier for customer service, and we've won a host of awards too, including The Sunday Times Fast Track 100. Over the next 12 months we are evolving into much more than just a utility provider, and you could be a part of it.

The role

We are moving from producing several ad-hoc reports based on SQL queries to building a platform on Google Cloud Platform (GCP). It will be based on a data warehouse in GCP, with Power BI and Looker on top, to provide the equivalent of report-as-a-service functionality.

We are looking for someone who has experience building analytics platforms with tens of different sources of data, someone who can leverage modern tools and programming languages to build ETL pipelines and extract intelligence from the data.

Our users (external but also internal) are extremely important to us, and understanding their experience throughout the various touchpoints matters. The main objective is to apply data intelligence best practices to understand, and later forecast, behaviour and trends.

* As a Data Engineer at So Energy, you will be responsible for building the ETL data pipelines
* Strong familiarity and experience with ingestion, streaming and batch processing, data infrastructure design, and data analytics
* Experience running and supporting production enterprise data platforms
* Experience with relational and non-relational databases
* Candidates must have some GCP knowledge and hands-on experience in the data engineering space
* Build real-time and batch data processing pipelines to publish data in GCP, matching the evolving needs of the product and business
* Provide technical support to internal teams within the organisation, and to external teams when required
* Code elegant, strategic solutions which are reusable and easily maintainable
* Produce clear, concise documentation where required
* Work closely with the Lead Data Engineer to apply the data strategy, ratifying it before creating any ETL pipelines
* Pro-actively implement solutions to better understand how internal and external users perform
* Work closely with internal business teams gathering data requirements, translating them into technical artefacts, and mapping them to the Google platform

Requirements

* Hands-on experience using Google Dataflow, GCS, Cloud Functions, BigQuery, Dataproc, and Apache Beam (Python) to design data transformation rules for batch and data streaming (a minimal Beam sketch follows this list)
* Experience implementing Data & Analytics applications on Google Cloud Platform
* Help deliver GCP data development projects (including development/coding, testing, and deployment into production)
* Provision data for analytics, data science, and machine learning purposes
* Expertise in designing data solutions for BigQuery
* Expertise in logical and physical data modelling
* Solid Python programming skills, including Apache Beam (Python)
* Experience building CI/CD pipelines and using pipeline tooling
* Good communication skills, including the ability to translate technical descriptions into something that can be understood by a non-technical, business-facing team member
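A minimal, hedged sketch of the Apache Beam (Python) style of pipeline the requirements describe: read lines, transform, filter, write. The bucket paths and the parsing logic are illustrative assumptions, not So Energy's actual pipeline.

```python
# Minimal Apache Beam pipeline sketch (paths and parsing are placeholders).
import apache_beam as beam

def run():
    # DirectRunner locally; on GCP you would pass DataflowRunner via PipelineOptions.
    with beam.Pipeline() as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/meter_readings.csv")
            | "Parse" >> beam.Map(lambda line: line.split(","))
            | "KeepValid" >> beam.Filter(lambda row: len(row) == 3)
            | "Format" >> beam.Map(lambda row: f"{row[0]},{row[2]}")
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output/clean")
        )

if __name__ == "__main__":
    run()
```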
Benefits

* Remote working available
* Pension matching as part of auto-enrolment
* 25 days holiday plus bank holidays, with an extra day for your birthday!
* Ongoing training and development
* Cycle to work
* Season Ticket Loan
* An opportunity to work in a fast-changing industry for a leading disruptor in the field who is changing the face of the energy industry
* Work in leafy Chiswick, with free breakfast, monthly drinks, and a stunning new office space

So Energy cares about helping the energy industry become a much more diverse and inclusive environment, and we work hard to lead by example. We are committed to Equal Employment Opportunity and building an inclusive environment for all.

If you are interested in finding out more, please apply, making sure to complete all the questions to the best of your ability and attach an up-to-date version of your CV.

Good luck!
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Engineer, Cloud, Python, and Apache jobs:
$80,000 — $120,000/year
This job post is closed and the position is probably filled. Please do not apply.
Build delightful software for podcasts and spoken word audio. [Backtracks](https://backtracks.fm/?ref=stackoverflow) is seeking a qualified Senior Python Developer with some PEP to join our Product & Engineering Team.

# Opportunity
[Backtracks](https://backtracks.fm/?ref=stackoverflow) helps audio content creators and brands know and grow their audience and revenue. You will be responsible for building the Python side of our web applications, platform, and APIs to deliver delightful experiences to our users and fellow developers.

## Your day-to-day
- Design and implement services and solutions in Python
- Code and deploy new features in collaboration with our product and engineering teams
- Be part of a small team, with a large amount of ownership and responsibility for managing things directly
- Ship high-quality solutions with a sense of urgency and speed
- Work closely with both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution
- Have the freedom to suggest and drive initiatives
- We expect you to be a tech-savvy professional who is curious about new digital technologies and aspires to simple and elegantly Pythonic code

## You have
**Experience working on design and development in any of the following roles:**
- Python Developer
- Python Engineer
- Full Stack Developer
- Full Stack Engineer
- Product Developer
- Product Engineer
- Software Architect
- BDFL

**A strong knowledge of:**
- Python 3.6+
- HTML, CSS, JS, and Jinja2
- SQLAlchemy (interfacing with PostgreSQL; see the short sketch at the end of this posting)
- Distributed systems design and implementation
- Messaging systems and technologies (e.g. RabbitMQ, Kafka, etc.)
- Search (e.g. Elasticsearch)
- Docker

**Confidence or experience working with some or all of the following:**
- Sanic, FastAPI, Apache mod_wsgi
- Spark, Flink
- Celery
- VueJS
- AWS, Google Cloud, Azure
- CI/CD deployment processes

**These qualities and traits:**
- History of autonomous design decision-making at technically focused companies
- History of designing and building web components, products, and technology
- Motivation and an enjoyment for a startup environment
- Systems thinker (considers how components can scale across our platform and product offerings)
- The ability to code and build out designs independently
- A blend of product, system, and people knowledge that lets you jump into a fast-paced environment and contribute from day one
- An ability to balance a sense of urgency with shipping high-quality and pragmatic solutions
- A strong sense of ownership
- Strong collaboration and communication skills (fluency in English)
- PMA (Positive Mental Attitude)
- Bachelor's degree in Computer Science or a relevant field, and/or relevant work experience
- 5+ years professional experience

**Perhaps even have these additional qualities and traits:**
- Passion for podcasts, radio, and spoken word audio
- Passion for delivering high-quality software with quick turnaround times (e.g. you ship)
- A product-first approach to building software
- An enthusiasm for hard problems
- Thrives in a fast-paced environment

## Bonus points if you have experience with:
- Analytics and/or advertising technology
- TimescaleDB
- AWS Batch
- Druid
- Drone, Jenkins, GitHub Actions, or CircleCI for CI/CD
- Audio analysis and processing
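A minimal, hedged sketch of the SQLAlchemy-with-PostgreSQL work mentioned above, written in SQLAlchemy 2.0 style. The connection URL, table, and columns are hypothetical, not Backtracks' actual schema.

```python
# Minimal SQLAlchemy 2.0-style sketch (URL, table, and columns are hypothetical).
from sqlalchemy import create_engine, select, Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class Podcast(Base):
    __tablename__ = "podcasts"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    title: Mapped[str] = mapped_column(String(200))

# Placeholder connection string; a real deployment would use app configuration.
engine = create_engine("postgresql+psycopg2://user:pass@localhost/example_dev")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Podcast(title="Example Show"))
    session.commit()
    titles = session.scalars(select(Podcast.title)).all()
    print(titles)
```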
#Location
Worldwide
This job post is closed and the position is probably filled. Please do not apply.
Front End Software Engineer (ReactJS)
Remote job

Job description

Railnova is hiring an experienced front-end software engineer (JavaScript/React) for our Railgenius software team to bring data analytics to railway end-users.

The Railgenius team is currently composed of a product manager, data scientists, and back-end engineers, and it leverages our UX/UI designer, infrastructure team, and other product development teams at Railnova. We want to reinforce the Railgenius team and product with an experienced ReactJS developer to strengthen its position as a stand-alone SaaS web product with a great user experience.

Our customers are very engaged and never shy of feature suggestions, so you'll work with our UX/UI designer, product manager, and support team to decide what to implement. You'll benefit from a lot of autonomy with a fast release cycle.

Real examples of work the Railgenius team has done lately
(These might help you get a better idea of what this position entails.)

* Implemented a user-friendly interface for a complex event processing rule engine, enabling our users to detect rolling stock failures in real time.
* Built a powerful data inspector graphing tool to offer our clients a way to discover and graph multiple correlated signals in the browser.
* Showed the live, interpolated position (think Flightradar24 for trains) of trains along railway lines and custom map layers.
* Optimized websockets bandwidth to cope with limited client browser capacity, while displaying hundreds of live sensors from a fleet of trains on a single page.
* Designed clever database models and APIs to express multi-tenant sharing of data and complex access permissions, preserving the privacy, security, and intellectual property of each party in the data sharing process.
* Talked directly to customers to understand the desirability and user fit of what is being built.
* Recently, we started to use Figma front-end features to facilitate communication between UX designers, product managers, and front-end developers, and Storybook to reuse front-end components.

Examples of what surrounding team members have done lately
(The Railgenius team is a multidisciplinary team, as you can see.)

* Data scientists trained a physical model on 24 months of historic battery data spanning hundreds of GB to provide a predictor of battery health while train assets are parked, writing their own software and integrating it into the pipeline and the user front end.
* Data scientists forecasted future usage of train locomotives by extracting past seasonality from our fine-grained historical data, to better predict maintenance dates.
* Data engineers optimised heavy SQL queries and indexes to offer great response times for time series querying and pattern search to our end-users.
* Data engineers migrated our real-time complex event processing framework from a homemade Python base to Apache Kafka to help absorb peak traffic and increase availability.
* The infrastructure team migrated most of our applications from bare metal servers to the AWS cloud in a few months in order to offer more reliability and improve the life of the engineering team.

Requirements

* You are passionate about making an awesome product for end users.
* You have a degree in computer science/engineering or an equivalent proven track record.
* You are an experienced JavaScript/ReactJS developer with familiarity with responsive design.
* You can think critically about a UX design from your programmer's perspective and have a good feel for usability and aesthetics.
* You have experience with back-end APIs, Python, and SQL.
* You are a good (written) communicator, you like working in a team, and you speak to customers.

What we offer

We want you to continue your personal development journey at Railnova. You'll be given space and time for deep focus on your work, be part of a technical and caring team, and have the opportunity to perfect your software engineering skills. On top of that, you'll get:

* A choice of being either fully remote (in Europe), partially remote, or full time in our offices near Brussels South train station (when sanitary conditions allow for it). Railnova has a remote culture (we are big fans and users of Basecamp), with a few full-time employees remote since day one.
* 32 days of paid holidays.
* Space to grow through deep focus on your work, one conference per year of your choice, extra courses, and self-learning.
* A young, multidisciplinary, and dynamic team in a medium-sized scale-up (~30 employees), with a rock-solid, subscription-based business model in IoT and data analytics.
* A large collection of perks, including a smartphone, a laptop of your choice, extra healthcare insurance, a transport card, and (depending on need) a company car.
* An open culture that nurtures creativity, while keeping our clients and the rest of the team in mind at all times.
* A balanced work environment (work from home, flexible working hours, no meetings, no emails).
* Meal vouchers.

How to apply

Please apply via the online application form and carefully fill in the three write-up questions to demonstrate that you are a good written communicator in English and an experienced JavaScript/ReactJS programmer. We will review your written submission within two weeks and let you know if you are invited to an interview. The recruiting process might also include an exercise down the line.

Agency calls are not appreciated.
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Engineer, Front End, Developer, Digital Nomad, English, JavaScript, Cloud, Python, API, SaaS, and Apache jobs:
$70,000 — $120,000/year
This job post is closed and the position is probably filled. Please do not apply.
Position: Sr. or Team Lead Machine Learning Engineer, Full Time

Location: Fully remote anywhere in the USA, or an optional office location in SF

At Cisco Meraki, we know that technology can connect, empower, and drive us. Our mission is to simplify technology so our customers can focus on what's most meaningful to them: their students, patients, customers, and businesses. We're making networking easier, faster, and smarter with technology that simply works.

As a Machine Learning Engineer on the Insight team, you will collaborate with firmware and full stack engineers to design, plan, and build customer-facing analytics tools. Meraki's cloud-managed model offers a unique opportunity to draw upon data from millions of networks across our wide-ranging customer base. The goal is to use the rich telemetry data available from these networks and combine it with the power of machine learning and the cloud to build an analytics engine that can provide intuitive yet detailed insights into the performance of customer networks.

What you can expect:

* Build a system that ingests real-time streams of network performance data and identifies network performance degradation, optimizing for both low latency and few false positives (see the sketch after the next list)
* Design models that predict network performance for customers to help them understand their network performance issues
* Work with firmware and backend engineers to design the uplink selection algorithm for SD-WAN
* Collaborate with full stack engineers to make intuitive data visualizations and integrate predictions seamlessly and powerfully into the user experience
* Build, maintain, and monitor data pipelines and infrastructure for training and deploying models

What we're ideally looking for:

* 5+ years of relevant industry experience
* Advanced training in mathematics, statistics, and modeling
* Experience programming in Python AND some other programming language like Scala, Golang, Ruby, etc.
* Experience working with algorithms and building models for supervised and unsupervised learning
* Experience using data processing and ML libraries such as Pandas, Scikit-Learn, TensorFlow, Keras, etc.
* Experience working with distributed computing engines like Apache Spark and real-time data streaming services like Amazon Kinesis
* Experience implementing and monitoring data pipelines
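As a rough illustration of the degradation-detection work described above (and not Meraki's actual method), here is a hedged scikit-learn sketch that flags anomalous latency/loss windows with an unsupervised model. The feature names, synthetic data, and contamination rate are invented for the example.

```python
# Unsupervised anomaly-detection sketch (illustrative, not Meraki's method).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical telemetry: [latency_ms, packet_loss_pct] per measurement window.
normal = rng.normal(loc=[40.0, 0.2], scale=[5.0, 0.1], size=(1000, 2))
degraded = rng.normal(loc=[180.0, 4.0], scale=[20.0, 1.0], size=(10, 2))
samples = np.vstack([normal, degraded])

# Fit on known-good history, then score new windows.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(samples)  # -1 = anomalous window, 1 = normal

print(f"flagged {np.sum(flags == -1)} of {len(samples)} windows as degraded")
```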
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Machine Learning, Engineer, Executive, Amazon, Cloud, Python, Apache, and Backend jobs:
$75,000 — $120,000/year
This job post is closed and the position is probably filled. Please do not apply.
Bombora provides a global B2B intent platform powered by the world's largest publisher data co-op. Our data allows sales teams to base their actions on the knowledge of which companies are in-market for their products, and empowers marketing teams to practice #SustainableMarketing.

We process billions of content interactions daily to detect intent signals from companies around the world. To do this, we use modern distributed processing and machine learning technologies, including Spark, Dataflow/Beam, Airflow, PyTorch, and BigQuery.

This is a full-time position with competitive benefits and compensation, preferably based in our Reno, NV office, though we are open to remote candidates as long as you are willing to travel to Reno occasionally.

What you will do

You will join our Data Engineering team, working alongside our data scientists and ML engineers to support Bombora R&D's mission to design, develop, and maintain our world-class B2B DaaS products, leveraging machine intelligence and web-content consumption data at scale. You will do this by:

* Creating and refining bounded (batch) and unbounded (streaming) ETL and ML data pipelines that comprise our production systems
* Advancing development and integration of our major analytics and ML codebases using modern and rigorous software engineering principles
* Helping to support and maintain our live production ETL and ML pipelines and systems
* Mentoring and advancing the development of your colleagues
* Having fun in an environment of collaboration, curiosity, and experimentation

Specific Responsibilities:

* Develop applications, libraries, and workflows with Python, Java, Apache Spark, Apache Beam, and Apache Airflow (a small PySpark sketch follows this list)
* Design and implement systems that run at scale on Google's Dataproc, Dataflow, Kubernetes Engine, Pub/Sub, and BigQuery platforms
* Learn, design, and implement algorithms and machine learning operations, at scale, using SciPy, PySpark, Spark Streaming, and MLlib libraries
* Learn and advance existing data models for our events, profiles, and other datasets
* Employ test-driven development, performance benchmarking, a rapid release schedule, and continuous integration
* Participate in daily stand-ups, story planning, reviews, retrospectives, and the occasional outing to nearby local cuisine and/or culture
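To give a flavor of the batch-pipeline work listed above, a minimal, hedged PySpark sketch: load events, aggregate intent signals per company per day, and write Parquet. The schema and paths are invented for the example, not Bombora's actual data model.

```python
# Minimal PySpark batch sketch (paths and columns are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("intent-rollup-example").getOrCreate()

events = spark.read.parquet("gs://example-bucket/events/")  # placeholder path

daily_intent = (
    events
    .where(F.col("event_type") == "content_view")
    .groupBy("company_id", F.to_date("event_ts").alias("day"))
    .agg(F.count("*").alias("interactions"))
)

daily_intent.write.mode("overwrite").parquet("gs://example-bucket/intent_daily/")
spark.stop()
```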
About you:

Your background:

* Education: B.S. / M.S. in computer science, physics, electrical engineering, applied mathematics, or equivalent experience.
* Work experience: 3+ years of real-world development experience, and 2+ years of experience with cloud and/or big-data platforms; GCP experience preferred.
* Language fluency: in Java / Python (at least 2 years of experience on commercial projects), and perhaps a few other languages.
* Data wrangler: driven by data, with the ability to leverage data to understand systems.
* Impactful and effective: lives and breathes TDD and agile methodologies in software, to great impact.
* Character: creativity, pragmatism, curiosity, and a good sense of humor.

Working knowledge of:

* Algorithms / data structures: design patterns, efficiency, using the right abstraction for the task.
* Functional programming: filters and maps, currying and partial evaluation, group-by and reduce-by.
* OOP: object paradigms to build components, when needed.
* Databases: familiar with both relational (MySQL, PostgreSQL) and NoSQL (HBase, Cassandra, etc.) databases.
* Data processing at scale: distributed computation, the map-reduce paradigm, and streaming processing; Spark experience is helpful.
* Build and release toolchains: experience deploying projects in both Python (conda, setuptools, etc.) and Java (Maven).
* Git: comfortable working with git (resolving conflicts, merging, branching, etc.).
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Engineer, Java, Cloud, NoSQL, Git, Python, Travel, Marketing, Sales, and Apache jobs:
$70,000 — $112,500/year
This job post is closed and the position is probably filled. Please do not apply.
BALANCE FOR BETTER

At Xapo, we embrace our differences and actively foster an inclusive environment where we all can thrive. We're a flexible, family-friendly environment, and we recognize that everyone has commitments outside of work. We have a goal of reaching gender parity and strongly encourage women to apply to our open positions. Diversity is not a tagline at Xapo; it is our foundation.

RESPONSIBILITIES

* Design and build data structures on MPP platforms like AWS Redshift and/or Druid.io.
* Design and build highly scalable data pipelines using AWS tools like Glue (Spark-based), Data Pipeline, and Lambda.
* Translate complex business requirements into scalable technical solutions.
* Maintain a strong understanding of analytics needs.
* Collaborate with the team on building dashboards, using self-service tools like Apache Superset or Tableau, and on data analysis to support the business.
* Collaborate with multiple cross-functional teams and work on solutions which have a larger impact on the Xapo business.

REQUIREMENTS

* In-depth understanding of data structures and algorithms.
* Experience in designing and building dimensional data models to improve the accessibility, efficiency, and quality of data.
* Experience in designing and developing ETL data pipelines.
* Proficiency in writing advanced SQL; expertise in SQL performance tuning.
* Programming experience building high-quality software; skills in Python or Scala preferred.
* Strong analytical and communication skills.

NICE TO HAVE SKILLS

* Work/project experience with big data and advanced programming languages.
* Experience using Java, Spark, Hive, Oozie, Kafka, and MapReduce.
* Work experience with AWS tools to process data (Glue, Data Pipeline, Kinesis, Lambda, etc.).
* Experience with, or advanced courses on, data science and machine learning.

OTHER REQUIREMENTS

A dedicated workspace. A reliable internet connection with the fastest speed possible in your area. Devices and other essential equipment that meet minimal technical specifications. Alignment with Our Values.

WHY WORK FOR XAPO?

Shape the Future: Improve lives through cutting-edge technology, and work remotely from anywhere in the world.

Own Your Success: Receive attractive remuneration, enjoy an autonomous work culture and flexible hours, and apply your expertise to meaningful work every day.

Expect Excellence: Collaborate, learn, and grow with a high-performance team.
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Engineer, Python, Scala, and Apache jobs:
$80,000 — $125,000/year
This job post is closed and the position is probably filled. Please do not apply.
This position can be remote, but US-based candidates only.

About Us:

Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends is spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.

DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today!

Want to learn more about who we are? Check us out here!

Job Description:

Dealer Inspire is changing the way car dealerships do business through data. We are assembling a team of engineers and data scientists to help build the next-generation distributed computing platform to support data-driven analytics and predictive modeling.

We are looking for a Data Engineer to join the team and play a critical role in the design and implementation of sophisticated data pipelines and real-time analytics streams that serve as the foundation of our data science platform. Candidates should have the following qualifications.

Required Experience

* 2-5 years' experience as a data engineer in a professional setting
* Knowledge of the ETL process and patterns of periodic and real-time data pipelines
* Experience with data types and data transfer between platforms
* Proficiency with Python and related libraries to support the ETL process (see the sketch after the candidate list below)
* Working knowledge of SQL
* Experience with the Linux console (bash, etc.)
* Knowledge of cloud-based AWS resources such as EC2, S3, and RDS
* Able to work closely with data scientists on the demand side
* Able to work closely with domain experts and data source owners on the supply side
* The ability to build a data pipeline monitoring system with robust, scalable dashboards and alerts for 24/7 operations

Preferred Experience

* College degree in a technical area (Computer Science, Information Technology, Mathematics, or Statistics)
* Experience with Apache Kafka, Spark, Ignite, and/or other big data tools
* Experience with JavaScript, Node.js, PHP, and other web technologies
* Working knowledge of Java or Scala
* Familiarity with tools such as Packer, Terraform, and CloudFormation

What we are looking for in a candidate:

* Experience with data engineering, Python, and SQL
* Willingness to learn new technologies and a whatever-it-takes attitude towards building the best possible data science platform
* A person who loves data and all things data-related, AKA a self-described data geek
* Enthusiasm and a "get it done" attitude!
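A small, hedged example of the Python-plus-AWS toolkit this role describes: listing new files under an S3 prefix with boto3 as the first step of a periodic ETL job. The bucket, prefix, and downstream steps are placeholders, not Dealer Inspire's actual pipeline.

```python
# Minimal boto3 sketch: enumerate S3 objects to feed an ETL step (names are placeholders).
import boto3

s3 = boto3.client("s3")

def list_new_objects(bucket: str, prefix: str) -> list[str]:
    """Return keys under a prefix; a real job would track already-processed keys."""
    keys: list[str] = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            keys.append(obj["Key"])
    return keys

if __name__ == "__main__":
    for key in list_new_objects("example-dealer-data", "raw/leads/"):
        print(key)  # downstream: download, transform, load into RDS
```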
Perks:

* Health insurance with BCBS, Delta Dental (orthodontics coverage available), EyeMed vision
* 401(k) plan with company match
* Tuition reimbursement
* 13 days paid time off, parental leave, and selected paid holidays
* Life and disability insurance
* Subsidized gym membership
* Subsidized internet access for your home
* Peer-to-peer bonus program
* Work-from-home Fridays
* Weekly in-office yoga classes
* Fully stocked kitchen and refrigerator

*Not a complete, detailed list. Benefits have terms and requirements before employees are eligible.
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Engineer, Java, Cloud, PHP, Python, Marketing, Apache, and Linux jobs:
$70,000 — $120,000/year
This job post is closed and the position is probably filled. Please do not apply.
Thrive Global is changing the way people live through our behavior change platform and apps, used to impact individual and organizational well-being and productivity. The marriage of data and analytics with our best-in-class content and science-backed behavior change IP will help people go from knowing what to do to actually doing it, enabling millions of consumers to begin the Thrive behavior change journey. As a technical lead on Thrive's Data Science and Analytics team, you will play a significant role in building Thrive's platform and products.

Who We Are Looking For

* A versatile engineering lead who has significant experience with data at all levels of maturity: raw telemetry through deployment and maintenance of models in operations
* Someone excited about collaborating with others, engineering and non-engineering, both learning and teaching as Thrive grows
* An innovator looking to push the boundaries of automation, intelligent ETL, AIOps, and MLOps to drive high-quality insights and operational efficiency within the team
* Someone with a proven track record of building and shipping data-centric software products
* Someone who desires a position that is approximately 75% individual technical contribution and 25% mentoring junior engineers or serving as a trusted advisor to engineering leadership
* Someone comfortable in a high-growth, start-up environment, willing to wear many hats and come up with creative solutions

How You'll Contribute

* Collaborate with the Head of Data Science and Analytics to design an architecture and infrastructure to support data engineering and machine learning at Thrive
* Implement a production-grade data science platform, which includes building data pipelines, automating data quality assessments, and automatically deploying models into production (a small data-quality sketch follows this list)
* Develop new technology solutions to ensure a seamless transition of machine learning algorithms to production software, to enable the building of easy-to-use datasets, and to reduce other friction points within the data science life-cycle
* Assist with building a small but skilled interdisciplinary team of data professionals: scientists, analysts, and engineers
* Consider user privacy and security at all times
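One plausible, hedged reading of "automating data quality assessments" above: a small pandas gate that validates a batch before it is allowed downstream. The column names, thresholds, and sample data are invented for the example, not Thrive's actual checks.

```python
# Minimal automated data-quality gate (columns and thresholds are hypothetical).
import pandas as pd

def check_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks; an empty list means the batch may proceed."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["user_id"].isna().mean() > 0.01:
        failures.append("more than 1% of user_id values are null")
    if not df["steps"].between(0, 200_000).all():
        failures.append("steps outside plausible range [0, 200000]")
    return failures

batch = pd.DataFrame({"user_id": [1, 2, None], "steps": [5000, 12000, 80]})
problems = check_batch(batch)
print(problems or "batch passed")
```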
Required Skills

* Master's or Ph.D. degree in Computer Science or a related discipline (e.g., Mathematics, Physics)
* 3+ years of technical leadership (team size of 5 or more) in data engineering or machine learning projects
* 8+ years of industry experience with data engineering and machine learning
* Extensive programming experience in Java or Python, with applications in data engineering and machine learning
* Experience with data modeling; large-scale batch and real-time data processing; and ETL design, implementation, and maintenance
* Excellent verbal and written communication skills
* A self-starter with a positive attitude, intellectual curiosity, and a passion for analytics and solving real-world problems

Relevant Technology and Tools Experience

A good cross-section of experience in the following areas is desired:

* AI/ML platforms: TensorFlow, Apache MXNet, Theano, Keras, CNTK, scikit-learn, H2O, Spark MLlib, AWS SageMaker, etc.
* Relational databases: MySQL, Postgres, Redshift, etc.
* Big data technologies: Spark, HDFS, Hive, Yarn, etc.
* Data ingestion tools: Kafka, NiFi, Storm, Amazon Kinesis, etc.
* Deployment technologies: Docker, Kubernetes, or OpenStack
* Public cloud: Azure, AWS, or Google Cloud Platform

Our Mission

Thrive Global's mission is to end the stress and burnout epidemic by offering companies and individuals sustainable, science-based solutions to enhance well-being, performance, and purpose, and to create a healthier relationship with technology. Recent science has shown that the pervasive belief that burnout is the price we must pay for success is a delusion. We know, instead, that when we prioritize our well-being, our decision-making, creativity, and productivity improve dramatically. Thrive Global is committed to accelerating the culture shift that allows people to reclaim their lives and move from merely surviving to thriving.

What We Offer

* A mission-driven company that's truly making a difference in the lives of people around the world
* The ability to develop within the company and shape our growth strategy
* A human-centric culture with a range of wellness perks and benefits
* A competitive compensation package
* Medical, vision, and dental coverage + a 401(k) program with company match
* Generous paid time-off programs
#Salary and compensation
No salary data published by the company, so we estimated a range based on similar Machine Learning, Engineer, Executive, Teaching, Amazon, Java, Cloud, Python, Junior, and Apache jobs:
$75,000 — $120,000/year
This job post is closed and the position is probably filled. Please do not apply. Work for Elastic and want to re-open this job? Use the edit link in the email when you posted the job!
At Elastic, we have a simple goal: to pursue the world’s data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what’s possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is limited only by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans over 100 countries.

Elastic’s Cloud product allows users to easily build new clusters or expand existing ones. This product is built on a Docker-based orchestration system to easily deploy and manage multiple Elasticsearch clusters.

What You Will Do:

* Be the owner of data quality for our KPI and analytics data. Ensure that we’re getting the data we need and that it’s as error-free as possible.
* Troubleshoot problems with data and reports.
* Work cross-functionally with product managers, analysts, and engineering teams to extract meaningful data from multiple sources.
* Develop test strategies, create test plans, execute test cases manually, and then create automation to reduce regressions.
* Be the point person for making sure that the raw data we use for our KPIs flows into our reporting systems.
* Test Elasticsearch models, queries, and Kibana visualizations. Use outlier detection and statistical methods to define and monitor valid data ranges (see the sketch after this list).
* Understand the Cloud business model and work with analysts to build presentations for executives and product owners.
* Generate reports and/or data dumps based on requirements provided by stakeholders. Contribute to automation of these reports.
* Grow and share your interest in technical outreach (blog posts, tech papers, conference speaking, etc.)
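As a rough illustration of "statistical methods to define and monitor valid data ranges", the sketch below flags KPI datapoints that fall outside a z-score band. It is an assumption-laden example, not Elastic’s actual tooling: the metric name is hypothetical, and a real check would query Elasticsearch rather than use an in-memory sample.

```python
# Minimal sketch of a valid-range monitor using a z-score outlier rule.
# The metric and sample values are hypothetical placeholders.
import statistics

def valid_range(values: list[float], k: float = 3.0):
    """Flag points more than k standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    low, high = mean - k * stdev, mean + k * stdev
    outliers = [v for v in values if not (low <= v <= high)]
    return (low, high), outliers

if __name__ == "__main__":
    daily_trial_signups = [52, 49, 55, 61, 47, 58, 53, 50, 540]  # 540 looks wrong
    (low, high), outliers = valid_range(daily_trial_signups)
    print(f"valid range: {low:.1f}..{high:.1f}, outliers: {outliers}")
```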
What You Bring Along

* You are passionate about software that delivers quality data to stakeholders.
* Experience testing models for SaaS KPIs such as user churn, trial conversion, etc.
* Experience with scripting languages such as Python, Ruby, and Bash.
* Experience with Jupyter and Python libraries like pandas and NumPy is a plus.
* Ability to write queries for Elasticsearch and relational data stores such as Postgres.
* Experience creating test plans for complex data analysis.
* Basic understanding of statistics and data modeling.
* A self-starter who has experience working across multiple technical teams and decision makers.
* You love working with a diverse, worldwide team in a distributed work environment.

Additional Information:

* Competitive pay and benefits
* Equity
* Catered lunches, snacks, and beverages in most offices
* An environment in which you can balance great work with a great life
* Passionate people building great products
* Employees with a wide variety of interests
* Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.

Elastic is an Equal Employment Opportunity employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state, or local law, ordinance, or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

#Salary and compensation
No salary data published by the company, so we estimated the range based on similar Cloud, Engineer, Elasticsearch, Python, SaaS, and Apache jobs:
$75,000 — $120,000/year
Summary:

We are looking for a Senior DevOps (Site Reliability) Engineer to join Selerity’s team, scaling up an A.I.-driven analytics and recommendation platform and integrating it into enterprise workflows. Highly competitive compensation, plus significant opportunities for professional growth and career advancement.

Employment Type: Contract or full-time

Location is flexible: We have offices in New York City and Oak Park, Illinois (a Chicago suburb), but about half of our team currently works remotely from various parts of Europe, North America, and Asia.

Job Description:

Want to change how the world engages with chat, research, social media, news, and data?

Selerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We’re tackling big challenges in predictive analytics, conversational interfaces, and workflow automation, and we need your help!

We’re looking for an experienced DevOps (Site Reliability) Engineer to join a major initiative at a critical point in our company’s growth. The majority of Selerity’s applications are developed in Java and C++ on Linux, but knowledge of other languages (especially Python and JavaScript), platforms, and levels of the stack is very helpful.

Must-haves:

* A rock-solid background in Computer Science (minimum BS in Comp Sci or a related field) plus at least 5 years (ideally 10+) of challenging work experience.
* Implementation of DevOps/SRE processes at scale, including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite); a monitoring sketch follows this list.
* Demonstrated proficiency building and modifying Java and C++ applications in Linux environments (using Git, SVN).
* Significant operations expertise with the Ansible (preferred), Chef, or Puppet deployment automation system in a cloud environment.
* Direct experience in the design, implementation, and maintenance of SaaS APIs that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).
* Solid track record of making effective design decisions balancing near-term and long-term objectives.
* Know when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.
* Work effectively in agile teams with remote members; get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.
* Clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc.
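For the platform-monitoring must-have, here is a minimal sketch of feeding a custom metric into Graphite over its plaintext listener (TCP port 2003 by default, using the documented `path value timestamp` line format). The host and metric path are hypothetical placeholders, not Selerity’s real infrastructure.

```python
# Minimal sketch of pushing a datapoint to Graphite's plaintext listener.
# Host, port, and metric path are hypothetical placeholders.
import socket
import time

def send_metric(path: str, value: float,
                host: str = "graphite.example.com", port: int = 2003) -> None:
    """Send one datapoint using Graphite's 'path value timestamp' line format."""
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

if __name__ == "__main__":
    send_metric("selerity.api.request_latency_ms", 12.7)
```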
Nice-to-haves:

* Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as OSI layer 1 and 2 protocols.
* Experience with Internet routing protocols such as BGP.
* Implementation of software-defined networking or other non-traditional networking paradigms.
* Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.
* Full-stack development and operations experience with web apps on Node.js.
* Experience with analytics visualization libraries.
* Experience with large-scale analytics and machine learning technologies, including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc.
* Conversant with relational, column, object, and graph database fundamentals, with strong practical experience in any of those paradigms.
* Deep understanding of how to build software agents and conversational workflows.
* Experience with additional modern programming languages (Python, Scala, …).

Our stack:

* Java, C++, Python, JavaScript/ECMAScript + Node, Angular, RequireJS, Electron, Scala, etc.
* A variety of open-source and in-house frameworks for natural language processing and machine learning, including artificial neural networks / deep learning.
* A hybrid of AWS (EC2, S3, RDS, R53) plus dedicated datacenter network, server, and GPU/coprocessor infrastructure.
* Cassandra and Aurora, plus an in-house streaming analytics pipeline (similar to Apache Flink) and an indexing/query engine (similar to Elasticsearch).
* In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (similarities to protobufs/FixFast/zeromq/itch).
* Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, IntelliJ.

This position offers a great opportunity to work with advanced technologies, collaborate with a top-notch, global team, and disrupt a highly visible, multi-billion-dollar market.

Compensation:

We understand how to attract and retain the best talent and offer a competitive mix of salary, benefits, and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.

We have team members on four continents, and we’re adept at making remote workers feel like part of the team. If you join our NYC main office, be sure to bring your Nerf toys, your drones, and your maker gear; we’re into that stuff, too.

Interview Process:

If you can see yourself at Selerity, send your resume and/or online profile (e.g., LinkedIn) to [email protected]. We’ll arrange a short introductory phone call, and if it sounds like there’s a match, we’ll arrange for you to meet the team for a full interview.

The interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. It is intended to be challenging, but the developers you meet and the topics you’ll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity.

We value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence.
You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity, or national origin.

Recruiters: Please note that we are not currently accepting referrals from recruiters for this position.

#Salary and compensation
No salary data published by the company, so we estimated the range based on similar DevOps, Senior, Crypto, Finance, Java, Cloud, Python, SaaS, Engineer, Apache, and Linux jobs:
$70,000 — $120,000/year
Job Title: Data Engineer

Location: Europe

About the company

Founded in 2010, Lynx Analytics is a predictive analytics company run by world-class quantitative marketing scientists and industry-experienced data scientists. Our focus is on solving big challenges, primarily through the deployment of our proprietary big graph analytics platform, which is based on a powerful Hadoop and Apache Spark back end.

Our head office is in Singapore, and we have operations in Hong Kong, the USA, Hungary, and Indonesia, with a presence in a few other Southeast Asian countries. Our team’s qualifications are global, including professors, PhDs, MScs, and MBAs from the Ivy League, INSEAD, and NUS, as well as experience at blue-chip tech and analytics companies and other start-ups. It is the combination of our insightful frameworks, product development capabilities, and these qualifications that is the true value of Lynx’s proposition.

Lynx Analytics technology is already deployed across the region for various clients, and there is significant growth potential for the company as well as numerous career opportunities as we scale up our operations.

Job description

Lynx Analytics is looking for a Software Engineer to work on automating and productizing advanced big data transformation and analytics pipelines; a small pipeline sketch is shown below. You would be working with standard big data technologies (Hadoop, Spark, etc.) as well as our proprietary big graph analysis framework. The ideal candidate is someone with on-site B2B experience in enterprise technology deployment, able to liaise with internal technology teams and work in a multi-stakeholder environment.
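To give a flavor of the kind of Spark transformation work involved, here is a minimal PySpark sketch of one batch aggregation step. It is an illustrative assumption, not Lynx’s proprietary framework; the HDFS paths and column names are hypothetical, and a production pipeline would add scheduling, configuration, and quality checks around it.

```python
# Minimal sketch of a batch transformation step in PySpark.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

# Read raw events, aggregate per subscriber per day, write an analytics table.
events = spark.read.parquet("hdfs:///data/raw/usage_events")
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("subscriber_id", "day")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("bytes_used").alias("total_bytes"),
    )
)
daily.write.mode("overwrite").partitionBy("day").parquet(
    "hdfs:///data/analytics/daily_usage"
)
spark.stop()
```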
Requirements

* Strong programming skills
* Solid knowledge of Python and Java
* Good understanding of the Linux operating system, including basic sysadmin and shell scripting abilities
* SQL
* Experience in project delivery in a B2B setting
* Good problem-solving skills
* Fluency in English
* Willingness to travel

Desirable:

* Experience in big data
* Experience in data science or analytics
* Industry experience working for a big enterprise (like our clients)

What we offer:

* Opportunity to create innovative, bleeding-edge data science pipelines using our state-of-the-art, in-house-built big graph tool
* Work closely with the developers of the (big graph) tool you will be building upon
* Be a member of a very strong team of mathematicians, ex-Googlers, Ivy League professors, INSEAD alumni business people, and telco industry experts
* Startup atmosphere
* Competitive salary
* Stock options for employees
* Opportunity to travel (Southeast Asia, US, Europe, ...)
* Flexible working hours, family-friendly workplace

#Salary and compensation

No salary data published by the company, so we estimated the range based on similar Engineer, Python, Travel, Marketing, Stats, Apache, and Linux jobs:
$80,000 — $120,000/year
Stacktical is a Predictive Scalability Testing platform. It ensures our customers design and ship software that always scales to the maximum of its ability and with minimum footprint.

The Stacktical Site Reliability Engineer is responsible for helping our customers engineer CI/CD pipelines around system testing practices that involve Stacktical. Like the rest of the team, they also actively participate in building the Stacktical platform itself.

We are looking for a skilled DevOps and Site Reliability Engineer, expert in scalability, who is excited about the vision of using predictive analytics and AI to reinvent the field. With a long-standing passion for automating your work and the work of others, you also understand how Software as a Service is increasingly empowering companies to do just that.

You can point to previous experience in startups, and you’re capable of working remotely, with great efficiency, in fast-paced, demanding environments. Ideally, you’d have a proven track record of working remotely for 2+ years. Needless to say, you fully embrace the working philosophy of digital nomadism we’re developing at Stacktical, with both the benefits and responsibilities that come with it.

Your role and responsibilities include the following:

- Architecture, implementation, and maintenance of server clusters, APIs, and microservices, including critical production environments, in cloud and other hosting configurations (dedicated, VPS, and shared).
- Ensure the availability, performance, and scalability of applications in line with proven design and architecture best practices.
- Design and execute scalability strategies that ensure the scalability and elasticity of the infrastructure.
- Manage a portfolio of software products and their development life cycle, and optimize their continuous integration and delivery workflows (CI/CD).
- Automate the quality and reliability testing of applications (unit tests, integration tests, E2E tests, performance and scalability tests); a small load-test sketch follows this list.
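As a rough illustration of automated performance testing, the sketch below fires concurrent requests at an endpoint and reports latency percentiles. It is an assumption, not Stacktical’s product: the target URL is a hypothetical placeholder, and a real scalability test would use a dedicated tool (Locust, k6, Gatling, ...).

```python
# Minimal sketch of a concurrent load test reporting latency percentiles.
# The target URL is a hypothetical placeholder.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://staging.example.com/health"

def timed_request(_: int) -> float:
    """Time one GET request and return its latency in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=20) as pool:
        latencies = sorted(pool.map(timed_request, range(200)))
    # quantiles(n=100) yields 99 cut points; indices 49/94/98 are p50/p95/p99.
    cuts = statistics.quantiles(latencies, n=100)
    print(f"p50={cuts[49]:.0f}ms  p95={cuts[94]:.0f}ms  p99={cuts[98]:.0f}ms")
```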
## Skills we are looking for

- A 50-50 mix of software development and system administration experience.
- Proficiency in Node.js, Python, R, Erlang (Elixir), and/or Go.
- Hands-on experience in NoSQL/SQL database optimization (slow-query indexing, sharding, clustering).
- Hands-on experience administering high-availability, high-performance environments, as well as managing large-scale deployments of traffic-heavy applications.
- Extensive knowledge of cloud computing concepts, technologies, and providers (Amazon AWS, Google Cloud Platform, Microsoft Azure…).
- A strong ability to design and execute cutting-edge system testing strategies (smoke tests, performance/load tests, regression tests, capacity tests).
- Excellent understanding of scalability processes and techniques.
- Good grasp of scalability and elasticity concepts and creative auto-scaling strategies (Auto Scaling Group management, API-based scheduling).
- Hands-on experience with Docker and Docker orchestration tools like Kubernetes and the corresponding provider-managed services (Amazon ECS, Google Container Engine, Azure Container Service...).
- Hands-on experience with leading Infrastructure as Code tools like Terraform and Ansible.
- Proven ability to work remotely with teams of various sizes in the same or different timezones, from anywhere, and still remain highly motivated, productive, and organized.
- Excellent English communication skills, including verbal, written, and presentation, plus great email and instant messaging (Slack) proficiency.

We’re looking for a self-learner always willing to step out of her/his comfort zone to become better. An upright individual, ready to write the first and many chapters of the Stacktical story with us.

## Life at our virtual office

Our headquarters are in Paris, but our offices and our clients are everywhere in the world. We’re a fully distributed company with a 100% remote workforce, so pretty much everything happens on Slack and various other collaborative tools.

## Remote work at Stacktical

Remote work at Stacktical requires you to forge a contract with the Stacktical company, using your own billing structure. That means you would either need to own a company or leverage a compatible legal status. Labour laws can differ largely from one country to another, and we are not (yet) in a position to comply with the local requirements of all our employees. Just because you will be a contractor doesn’t make you less of a fully-fledged employee of Stacktical. In fact, even our founders are contractors too.

## Compensation Package

#### Fixed-price contract

Your contract’s fixed price is engineered around your expectations, our possibilities, and the overall implications of remote work. Let’s have a transparent chat about it.

#### Stock Options

Yes, joining Stacktical means you are entrusted to own part of the company.

#Salary and compensation
No salary data published by the company, so we estimated the range based on similar DevOps, JavaScript, Cloud, Erlang, Python, Node, API, Admin, Engineer, Apache, Nginx, Sys Admin, Docker, English, NoSQL, Microsoft, and Legal jobs:
$70,000 — $120,000/year