This job post is closed and the position is probably filled. Please do not apply.
Bombora provides a global B2B intent platform powered by the world's largest publisher data co-op. Our data allows sales teams to base their actions on the knowledge of which companies are in-market for their products, and empowers marketing teams to practice #SustainableMarketing.

We process billions of content interactions daily to detect intent signals from companies around the world. To do this, we use modern distributed processing and machine learning technologies, including Spark, Dataflow/Beam, Airflow, PyTorch, and BigQuery.

This is a full-time position with competitive benefits and compensation, preferably based in our Reno, NV office, though we are open to remote candidates as long as you are willing to travel to Reno occasionally.

What you will do

You will join our Data Engineering team, working alongside our data scientists and ML engineers to support Bombora R&D's mission to design, develop and maintain our world-class B2B DaaS products, leveraging machine intelligence and web-content consumption data at scale. You will do this by:

* Creating and refining bounded (batch) and unbounded (streaming) ETL and ML data pipelines that comprise our production systems
* Advancing development and integration of our major analytics and ML codebases using modern and rigorous software engineering principles
* Helping to support and maintain our live production ETL and ML pipelines and systems
* Mentoring and advancing the development of your colleagues
* Having fun in an environment of collaboration, curiosity, and experimentation

Specific Responsibilities:

* Develop applications, libraries and workflows with Python, Java, Apache Spark, Apache Beam, and Apache Airflow (a minimal sketch of this kind of pipeline follows this list)
* Design and implement systems that run at scale on Google's Dataproc, Dataflow, Kubernetes Engine, Pub/Sub, and BigQuery platforms
* Learn, design and implement algorithms and machine learning operations, at scale, using SciPy, PySpark, Spark Streaming, and MLBase libraries
* Learn and advance existing data models for our events, profiles and other datasets
* Employ test-driven development, performance benchmarking, a rapid release schedule, and continuous integration
* Participate in daily stand-ups, story planning, reviews, retrospectives, and occasional outings to sample the nearby local cuisine and/or culture
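To give a flavor of the batch pipeline work described in the responsibilities above, here is a minimal sketch using the Apache Beam Python SDK. The bucket paths, event format, and field layout are hypothetical placeholders rather than Bombora's actual data; on GCP, a pipeline like this would typically be submitted to Dataflow through its pipeline options.

```python
# A minimal, hypothetical Apache Beam batch pipeline: read raw content-interaction
# events, count them per company domain, and write the counts out.
# Paths and field layout are illustrative only.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line):
    """Parse a CSV-style event line into (company_domain, 1). Format is assumed."""
    fields = line.split(",")
    return (fields[0], 1)  # fields[0] is assumed to hold the company domain


def run():
    options = PipelineOptions()  # on GCP this would carry the Dataflow runner settings
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
            | "ParseEvents" >> beam.Map(parse_event)
            | "CountPerCompany" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda domain, n: f"{domain},{n}")
            | "WriteCounts" >> beam.io.WriteToText("gs://example-bucket/output/counts")
        )


if __name__ == "__main__":
    run()
```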
About you:

- Your background:

* Education: B.S. / M.S. in computer science, physics, electrical engineering, applied mathematics, or equivalent experience.
* Work experience: 3+ years of real-world development experience and 2+ years of experience with cloud and/or big-data platforms; GCP experience preferred.
* Language Fluency: Java / Python (at least 2 years of experience on commercial projects), and perhaps a few other languages.
* Data wrangler: Driven by data, with the ability to leverage data to understand systems.
* Impactful and effective: You live and breathe TDD and agile methodologies in software, to great impact.
* Character: Creativity, pragmatism, curiosity, and a good sense of humor.

- Working knowledge of:

* Algorithms / data structures: Design patterns, efficiency, using the right abstraction for the task.
* Functional programming: Filters and maps, currying and partial evaluation, group-by and reduce-by (see the sketch after this list).
* OOP: Object paradigms to build components, when needed.
* Databases: Familiarity with both relational (MySQL, PostgreSQL) and NoSQL (HBase, Cassandra, etc.) databases.
* Data processing at scale: Distributed computation, the map-reduce paradigm, and stream processing; Spark experience is helpful.
* Build and release toolchains: Experience deploying projects in both Python (conda, setuptools, etc.) and Java (Maven).
* Git: Comfortable working with git (resolving conflicts, merging, branching, etc.).
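As a small illustration of the functional-programming patterns named in the list above (filter/map, partial evaluation, group-by and reduce-by), here is a plain-Python sketch; the event records and the score threshold are made up for the example.

```python
# Plain-Python illustration of filter/map, partial evaluation,
# and a group-by / reduce-by aggregation. Data is hypothetical.
from functools import partial, reduce
from itertools import groupby
from operator import itemgetter

events = [
    {"domain": "acme.com", "topic": "CRM", "score": 3},
    {"domain": "acme.com", "topic": "CRM", "score": 5},
    {"domain": "globex.com", "topic": "ERP", "score": 2},
]


def above(threshold, event):
    return event["score"] >= threshold


# Partial evaluation / currying: pre-bind the score threshold.
is_strong = partial(above, 2)

# Filter + map: keep qualifying events and project them to (domain, score).
pairs = [(e["domain"], e["score"]) for e in filter(is_strong, events)]

# Group-by + reduce-by: sum scores per domain (groupby needs sorted input).
pairs.sort(key=itemgetter(0))
totals = {
    domain: reduce(lambda acc, pair: acc + pair[1], group, 0)
    for domain, group in groupby(pairs, key=itemgetter(0))
}

print(totals)  # {'acme.com': 8, 'globex.com': 2}
```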
# Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Engineer, Java, Cloud, NoSQL, Git, Python, Travel, Marketing, Sales and Apache:
$70,000 — $112,500/year
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
This position can be remote, but US-based candidates only.

About Us:

Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends is spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.

DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today!

Want to learn more about who we are? Check us out here!

Job Description:

Dealer Inspire is changing the way car dealerships do business through data. We are assembling a team of engineers and data scientists to help build the next-generation distributed computing platform to support data-driven analytics and predictive modeling.

We are looking for a Data Engineer to join the team and play a critical role in the design and implementation of sophisticated data pipelines and real-time analytics streams that serve as the foundation of our data science platform. Candidates should have the following qualifications.

Required Experience

* 2-5 years of experience as a data engineer in a professional setting
* Knowledge of the ETL process and patterns for periodic and real-time data pipelines (a minimal sketch follows the lists below)
* Experience with data types and data transfer between platforms
* Proficiency with Python and related libraries to support the ETL process
* Working knowledge of SQL
* Experience with Linux-based systems and the console (bash, etc.)
* Knowledge of cloud-based AWS resources such as EC2, S3, and RDS
* Able to work closely with data scientists on the demand side
* Able to work closely with domain experts and data source owners on the supply side
* Ability to build a data pipeline monitoring system with robust, scalable dashboards and alerts for 24/7 operations

Preferred Experience

* College degree in a technical area (Computer Science, Information Technology, Mathematics or Statistics)
* Experience with Apache Kafka, Spark, Ignite and/or other big data tools
* Experience with JavaScript, Node.js, PHP and other web technologies
* Working knowledge of Java or Scala
* Familiarity with tools such as Packer, Terraform, and CloudFormation

What we are looking for in a candidate:

* Experience with data engineering, Python and SQL
* Willingness to learn new technologies and a whatever-it-takes attitude toward building the best possible data science platform
* A person who loves data and all things data-related, AKA a self-described data geek
* Enthusiasm and a "get it done" attitude!
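To illustrate the kind of periodic ETL work the required-experience list refers to, here is a minimal Python sketch that extracts a CSV drop from S3, transforms it with pandas, and loads the result into a relational table such as one hosted on RDS. The bucket, key, column names, table, and connection string are hypothetical placeholders, not actual Dealer Inspire resources.

```python
# A minimal, hypothetical periodic ETL job: extract a CSV object from S3,
# apply a small transform in pandas, and append the rows to a SQL table.
# All names below are placeholders for illustration.
import io

import boto3
import pandas as pd
from sqlalchemy import create_engine


def extract(bucket: str, key: str) -> pd.DataFrame:
    """Pull a raw CSV object out of S3 into a DataFrame."""
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
    return pd.read_csv(io.BytesIO(obj["Body"].read()))


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize column names and drop rows missing a dealer identifier."""
    df.columns = [c.strip().lower() for c in df.columns]
    return df.dropna(subset=["dealer_id"])


def load(df: pd.DataFrame, table: str, conn_str: str) -> None:
    """Append the cleaned rows to a relational table (e.g. on RDS)."""
    engine = create_engine(conn_str)
    df.to_sql(table, engine, if_exists="append", index=False)


if __name__ == "__main__":
    frame = transform(extract("example-bucket", "drops/leads.csv"))
    load(frame, "leads_clean", "postgresql://user:pass@example-rds:5432/analytics")
```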
Perks:

* Health insurance with BCBS, Delta Dental (orthodontics coverage available), EyeMed Vision
* 401(k) plan with company match
* Tuition reimbursement
* 13 days of paid time off, parental leave, and selected paid holidays
* Life and Disability Insurance
* Subsidized gym membership
* Subsidized internet access for your home
* Peer-to-Peer Bonus program
* Work-from-home Fridays
* Weekly in-office yoga classes
* Fully stocked kitchen and refrigerator

*Not a complete, detailed list. Benefits have terms and requirements before employees are eligible.

# Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Engineer, Java, Cloud, PHP, Python, Marketing, Apache and Linux:
$70,000 — $120,000/year
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
Job Title: Data Engineer

Location: Europe

About the company

Founded in 2010, Lynx Analytics is a predictive analytics company run by world-class quantitative marketing scientists and industry-experienced data scientists. Our focus is on solving big challenges, primarily through the deployment of our proprietary big graph analytics platform, which is built on a powerful Hadoop and Apache Spark back-end.

Our head office is in Singapore, and we have operations in Hong Kong, the USA, Hungary and Indonesia, with a presence in a few other Southeast Asian countries. Our team's qualifications are global, including professors, PhDs, MScs, and MBAs from the Ivy League, INSEAD and NUS, as well as experience at blue-chip tech and analytics companies and other start-ups. It is the combination of our insightful frameworks, product development capabilities, and these qualifications that forms the true value of Lynx's proposition.

Lynx Analytics technology is already deployed across the region for various clients, and there is significant growth potential for the company, as well as numerous career opportunities as we scale up our operations.

Job description

Lynx Analytics is looking for a Software Engineer to work on automating and productizing advanced big data transformation and analytics pipelines. You would be working with standard big data technologies (Hadoop, Spark, etc.) as well as our proprietary big graph analysis framework (a minimal Spark sketch follows the lists below). The ideal candidate has onsite B2B experience with enterprise technology deployments, and should be able to liaise with internal technology teams and work in a multi-stakeholder environment.

Requirements

* Strong programming skills
* Solid knowledge of Python and Java
* Good understanding of the Linux operating system, including basic sysadmin and shell-scripting abilities
* SQL
* Experience with project delivery in a B2B setting
* Good problem-solving skills
* Fluency in English
* Willingness to travel

Desirable:

* Experience in big data
* Experience in data science or analytics
* Industry experience working for a big enterprise (like our clients)

What we offer:

* The opportunity to work on innovative, bleeding-edge data science pipelines using our state-of-the-art, in-house-built big graph tool
* Working closely with the developers of the (big graph) tool you will be building upon
* Membership in a very strong team of mathematicians, ex-Googlers, Ivy League professors, INSEAD-alumni business people and telco industry experts
* A startup atmosphere
* Competitive salary
* Stock options for employees
* The opportunity to travel (Southeast Asia, the US, Europe, ...)
* Flexible working hours and a family-friendly workplace
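As a flavor of the standard big data work mentioned in the job description, here is a minimal PySpark sketch that loads a graph edge list, aggregates it per node, and exposes the result to Spark SQL. The input path and column names are hypothetical, and this is not Lynx's proprietary big graph framework.

```python
# A minimal, hypothetical PySpark job: load an edge list, compute per-node
# aggregates, and query them through Spark SQL. Paths and columns are
# illustrative placeholders only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edge-degree-sketch").getOrCreate()

# Assumed input: a CSV edge list with columns src, dst, weight.
edges = spark.read.csv("hdfs:///data/edges.csv", header=True, inferSchema=True)

# Out-degree and total outgoing weight per source node.
degrees = (
    edges.groupBy("src")
    .agg(F.count("*").alias("out_degree"), F.sum("weight").alias("out_weight"))
)

# The same result is also queryable through Spark SQL.
degrees.createOrReplaceTempView("degrees")
spark.sql("SELECT * FROM degrees ORDER BY out_degree DESC LIMIT 10").show()

spark.stop()
```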
# Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Engineer, Python, Travel, Marketing, Stats, Apache and Linux:
$80,000 — $120,000/year
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.