About Sayari:
Sayari is the counterparty and supply chain risk intelligence provider trusted by government agencies, multinational corporations, and financial institutions. Its intuitive network analysis platform surfaces hidden risk through integrated corporate ownership, supply chain, trade transaction, and risk intelligence data from over 250 jurisdictions. Sayari is headquartered in Washington, D.C., and its solutions are used by thousands of frontline analysts in over 35 countries.

Our company culture is defined by a dedication to our mission of using open data to enhance visibility into global commercial and financial networks, a passion for finding novel approaches to complex problems, and an understanding that diverse perspectives create optimal outcomes. We embrace cross-team collaboration, encourage training and learning opportunities, and reward initiative and innovation. If you like working with supportive, high-performing, and curious teams, Sayari is the place for you.

Job Description:
Sayari's flagship product, Sayari Graph, provides instant access to structured business information from billions of corporate, legal, and trade records. As a member of Sayari's data team, you will work with the Product and Software Engineering teams to collect data from around the globe, maintain existing data pipelines, and develop new pipelines that power Sayari Graph.

Job Responsibilities:
* Write and deploy crawling scripts to collect source data from the web
* Write and run data transformers in Scala Spark to standardize bulk data sets (a minimal sketch follows the skills list below)
* Write and run modules in Python to parse entity references and relationships from source data
* Diagnose and fix bugs reported by internal and external users
* Analyze and report on internal datasets to answer questions and inform feature work
* Work collaboratively on and across a team of engineers using agile principles
* Give and receive feedback through code reviews

Skills & Experience:
* Professional experience with Python and a JVM language (e.g., Scala)
* 2+ years of experience designing and maintaining data pipelines
* Experience using Apache Spark and Apache Airflow
* Experience with SQL and NoSQL databases (e.g., column stores, graph databases)
* Experience working on a cloud platform like GCP, AWS, or Azure
* Experience working collaboratively with Git
* Understanding of Docker/Kubernetes
* Interest in learning from and mentoring team members
* Experience supporting and working with cross-functional teams in a dynamic environment
* Passion for open source development and innovative technology
* Experience with BI tools like BigQuery and Superset is a plus
* Understanding of knowledge graphs is a plus
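To make the Scala Spark transformer work above concrete, here is a minimal sketch of a bulk standardization job. It is an illustration only, assuming a CSV extract with company_name, jurisdiction, registered_on, and registration_number columns; the object name and S3 paths are hypothetical stand-ins, not Sayari's actual pipeline.

```scala
// Hypothetical sketch: standardize a bulk corporate-registry extract.
// All paths and column names are invented for illustration.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object StandardizeCompanies {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("standardize-companies")
      .getOrCreate()

    // Read the raw bulk extract (assumed layout).
    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/companies/*.csv")

    // Normalize names, codes, and dates, then drop duplicate entities.
    val standardized = raw
      .withColumn("name", trim(upper(col("company_name"))))
      .withColumn("jurisdiction", lower(trim(col("jurisdiction"))))
      .withColumn("registered_on", to_date(col("registered_on"), "yyyy-MM-dd"))
      .dropDuplicates("registration_number", "jurisdiction")

    // Write partitioned Parquet for downstream pipelines.
    standardized.write
      .mode("overwrite")
      .partitionBy("jurisdiction")
      .parquet("s3://example-bucket/standardized/companies/")

    spark.stop()
  }
}
```

In a stack like the one described, a job of this shape would typically be scheduled and monitored with Apache Airflow, which the skills list also calls out.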
$100,000 - $125,000 a year
The target base salary for this position is $100,000 - $125,000 USD plus bonus. Final offer amounts are determined by multiple factors, including location, local market variances, candidate experience and expertise, and internal peer equity, and may vary from the amounts listed above.

Benefits:
* Limitless growth and learning opportunities
* A collaborative and positive culture - your team will be as smart and driven as you
* A strong commitment to diversity, equity & inclusion
* Exceedingly generous vacation leave, parental leave, floating holidays, flexible schedule, & other remarkable benefits
* Outstanding competitive compensation & commission package
* Comprehensive family-friendly health benefits, including full healthcare coverage plans, commuter benefits, & 401K matching

Sayari is an equal opportunity employer and strongly encourages diverse candidates to apply. We believe diversity and inclusion mean our team members should reflect the diversity of the United States. No employee or applicant will face discrimination or harassment based on race, color, ethnicity, religion, age, gender, gender identity or expression, sexual orientation, disability status, veteran status, genetics, or political affiliation. We strongly encourage applicants of all backgrounds to apply.
CAPCO POLAND

*We are looking for a Poland-based candidate. The job is remote but may require some business trips.

Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to #BeYourselfAtWork. We celebrate individuality and recognize that diversity and inclusion, in all forms, are critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table - so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.

Capco Poland is a global technology and management consultancy specializing in driving digital transformation across the financial services industry. We are passionate about helping our clients succeed in an ever-changing industry.

We are also experts focused on development, automation, innovation, and long-term projects in financial services. At Capco, you can code, write, create, and live at your maximum capabilities without getting dull, tired, or foggy.

We're seeking a skilled Mid Big Data Engineer to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable data pipelines and solutions on on-prem, migration, and cloud projects for large-scale data processing and analytics.

THINGS YOU WILL DO

* Design, develop, and maintain robust data pipelines using Scala or Python, Spark, Hadoop, and SQL for batch and streaming data processing (a minimal batch sketch follows the tech stack below)
* Collaborate with cross-functional teams to understand data requirements and design efficient solutions that meet business needs
* Optimize Spark jobs and data processing workflows for performance, scalability, and reliability
* Ensure data quality, integrity, and security throughout the data lifecycle
* Troubleshoot and resolve data pipeline issues in a timely manner to minimize downtime and impact on business operations
* Stay updated on industry best practices, emerging technologies, and trends in big data processing and analytics
* Document design specifications, deployment procedures, and operational guidelines for data pipelines and systems
* Provide technical guidance and mentorship to new joiners

TECH STACK: Python or Scala, OOP, Spark, SQL, Hadoop

Nice to have: GCP, Pub/Sub, BigQuery, Kafka, Juniper, Apache NiFi, Hive, Impala, Cloudera, CI/CD
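As a rough sketch of the batch side of this work (the streaming side would look similar with Spark Structured Streaming and Kafka), here is a small Spark job in Scala. The table names, columns, and the broadcast-join optimization shown are illustrative assumptions, not Capco or client code.

```scala
// Illustrative Spark batch job on a Hadoop/Hive stack.
// Tables and columns are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyBranchTotals {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-branch-totals")
      .enableHiveSupport() // Hive tables, per the posting's Hadoop stack
      .getOrCreate()

    val transactions = spark.table("raw.transactions") // large fact table
    val branches     = spark.table("ref.branches")     // small dimension table

    // Broadcasting the small dimension avoids a shuffle-heavy join --
    // the kind of optimization the responsibilities list calls for.
    val totals = transactions
      .join(broadcast(branches), Seq("branch_id"))
      .where(col("booking_date") === current_date())
      .groupBy("branch_id", "currency")
      .agg(sum("amount").as("total_amount"), count("*").as("tx_count"))

    totals.write.mode("overwrite").saveAsTable("curated.daily_branch_totals")
    spark.stop()
  }
}
```

Equivalent logic can be written in PySpark nearly line for line, matching the "Python or Scala" requirement below.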
SKILLS & EXPERIENCES YOU NEED TO GET THE JOB DONE

* Min. 3-4 years of experience as a Data Engineer/Big Data Engineer
* University degree in computer science, mathematics, natural sciences, or a similar field, plus relevant working experience
* Excellent SQL skills, including advanced concepts
* Very good programming skills in Python or Scala
* Experience with Spark and Hadoop
* Experience with OOP
* Experience using agile frameworks like Scrum
* Interest in financial services and markets
* Nice to have: experience or knowledge of GCP
* Fluent English communication and presentation skills
* Sense of humor and positive attitude

WHY JOIN CAPCO?

* Employment contract and/or business-to-business - whichever you prefer
* Possibility to work remotely
* Speaking English on a daily basis, mainly in contact with foreign stakeholders and peers
* Multiple employee benefits packages (MyBenefit Cafeteria, private medical care, life insurance)
* Access to a platform of 3,000+ business courses (Udemy)
* Access to required IT equipment
* Paid referral program
* Participation in charity events, e.g., Szlachetna Paczka
* Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
* Being part of the core squad focused on the growth of the Polish business unit
* A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
* A work culture focused on innovation and creating lasting value for our clients and employees

ONLINE RECRUITMENT PROCESS STEPS*

* Screening call with the recruiter
* Technical interview: first stage
* Client interview
* Feedback/offer

*The recruitment process may be modified.

#Salary and compensation
No salary data was published by the company, so the salary below was estimated from similar Design, Python, Recruiter, Cloud, Senior, and Engineer jobs:

$57,500 - $92,500/year
#Location
Kraków, Lesser Poland Voivodeship, Poland
AirDNA began with a big dream in a balmy California garage in 2015. Since then, the technology startup has grown into the leading provider of data and business intelligence for the billion-dollar travel and vacation rental industry, with offices in Denver and Barcelona.

Our self-serve platform eliminates guesswork and equips Airbnb hosts with the smart, competitive insights needed to succeed in the ever-evolving short-term rental landscape.

We also arm enterprise clients with customized reports and in-depth dashboards to ensure they can scale and invest strategically. These customers include hundreds of top financial institutions, real estate companies, vacation rental managers, and destination marketing organizations around the world.

We track the daily performance of over 10 million Airbnb and Vrbo properties across 120,000 global markets. We also collect data from over a million partner properties. This marriage of scraped and source data, enhanced by our proprietary algorithms, makes our solutions the most accurate and comprehensive in the world.

We're firm believers that data isn't the destination; it's the starting point. The launchpad. The bedrock for any future-forward business.

The AirDNA Team
At AirDNA, we're a team of data nerds on a mission to empower our customers to smartly grow their businesses. And we're looking for people with a broad range of experiences and perspectives who are excited by our mission, values, and drive to change the short-term rental market. In short: life is never boring here. And we genuinely live and breathe our company values: Happy, Hungry, and Honest. People who are ready to exemplify these values are especially encouraged to apply. We invite you to apply even if you are unsure whether you meet every single requirement in this posting. We carefully consider every application, not just those that check off all the boxes.

The Role
AirDNA is looking for a Senior Data Engineer to help us build a solid and scalable data infrastructure and analytical environment. The ideal candidate enjoys data wrangling, optimizing data pipelines, and building solutions for data collection and analysis in parallelized and scaled-out environments.
The AirDNA infrastructure currently processes billions of data points and hosts databases well into terabyte-size ranges, making this an interesting and challenging opportunity.

Here's what you'll get to do
* Build and extend data pipelines, written in Scala and running on Spark
* Tune Spark job performance (a small tuning sketch follows this list)
* Write clean, performant Scala code
* Work closely with our Product, Data Science, and other Engineering teams to craft new products
* Contribute to Python- and Scala-based microservices running on Kubernetes
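To give a flavor of the Spark tuning mentioned in the list above, here is a hedged sketch in Scala. The dataset, columns, partition counts, and paths are illustrative assumptions, not AirDNA's actual configuration.

```scala
// Illustrative Spark tuning sketch; names and numbers are assumptions.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.storage.StorageLevel

object ListingMetrics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("listing-metrics")
      // Size shuffle parallelism to the cluster instead of the default 200.
      .config("spark.sql.shuffle.partitions", "400")
      .getOrCreate()

    val bookings = spark.read.parquet("s3://example-bucket/bookings/")

    // Persist a reused intermediate so two aggregations don't recompute it.
    val recent = bookings
      .where(col("booked_at") >= lit("2024-01-01"))
      .persist(StorageLevel.MEMORY_AND_DISK)

    val byMarket   = recent.groupBy("market_id").agg(avg("adr").as("avg_adr"))
    val byProperty = recent.groupBy("property_id").agg(sum("revenue").as("total_revenue"))

    // Coalesce before writing to avoid thousands of tiny output files.
    byMarket.coalesce(32).write.mode("overwrite")
      .parquet("s3://example-bucket/metrics/by_market/")
    byProperty.coalesce(64).write.mode("overwrite")
      .parquet("s3://example-bucket/metrics/by_property/")

    recent.unpersist()
    spark.stop()
  }
}
```

On EMR or Databricks (both in the tech stack listed below), Spark 3's adaptive query execution can handle some of this shuffle tuning automatically.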
Here's what you'll need to be successful
* 5 or more years of experience in a Data Engineer role
* BS/BA in Computer Science
* Experience developing cloud-based pipelines
* Strong experience with Spark
* Experience with Scala and exposure to Python
* Experience developing and deploying cloud-based data solutions
* Experience with any of the following is considered an asset: streaming technology such as Kafka or Google Pub/Sub, Databricks, AWS/GCP, Postgres, Protobuf, Snowflake or other data warehousing solutions, and test-driven development
* Ability to operate with strong ownership and independence

Tech stack
* Scala
* Python
* Spark
* Databricks
* AWS EMR, Athena, RDS (Postgres), Redshift, and S3

Job perks
* Competitive cash compensation and benefits; the salary range for this position is €100,000 - €120,000 per year
* Eligibility for the company's annual discretionary bonus program
* 36 holidays per year
* Continuing education stipend
* Talented international team and a vibrant work environment
* Bottom-up management: we listen to your ideas and implement them
* Team-building events
* Dog-friendly office!

AirDNA seeks to attract the best-qualified candidates who support the mission, vision, and values of the company and those who respect and promote excellence through diversity. We are committed to providing equal employment opportunities (EEO) to all employees and applicants without regard to race, color, creed, religion, sex, age, national origin, citizenship, sexual orientation, gender identity and expression, physical or mental disability, marital, familial or parental status, genetic information, military status, veteran status, or any other legally protected classification. The company complies with all applicable state and local laws governing nondiscrimination in employment and prohibits unlawful harassment based on any of the aforementioned protected classes at every location in which the company operates. This applies to all terms, conditions, and privileges of employment, including but not limited to: hiring, assessments, probation, placement, benefits, promotion, demotion, termination, layoff, recall, transfer, leave of absence, compensation, training and development, social and recreational programs, education assistance, and retirement.

We are committed to making our application process and workplace accessible for individuals with disabilities. Upon request, AirDNA will reasonably accommodate applicants so they can participate in the application process unless doing so would create an undue hardship to AirDNA or a threat to these individuals, others in the workplace, or the company as a whole. To request accommodation, please email [email protected]. Please allow 24 hours to process your request.

By applying for the above position, you will confirm that you have reviewed and agreed to our Data Privacy Notice for Applicants.
#Location
EU