This job post is closed and the position is probably filled. Please do not apply. It was closed automatically after the apply link returned a 404 error, 2 years ago.
About the Company

Craft.co is a supplier intelligence company helping organizations accelerate data-informed business decisions. Our unique, proprietary data platform tracks thousands of real-time signals across millions of companies globally, delivering best-in-class monitoring and insight into global supply chains, among other company cohorts. Our clients, including Fortune 100 companies, government and military agencies, SMEs, asset management groups, and others, use our technology for supply chain intelligence, market intelligence, and related use cases. Through our modular, secure, customizable portal, our clients can monitor any company they are working with and drive critical actions in real time.

We are a well-funded technology company with leading investors from Silicon Valley and elsewhere, but we are not your typical data or SaaS startup. Our CEO is a seasoned entrepreneur and a Juilliard-trained cellist. The Craft team is globally distributed, with headquarters in San Francisco and an office in London. We fully support and encourage remote workers, and have team members across North and South America and Europe. We are looking for innovative and driven people who are passionate about building delightful software to join our rapidly growing team!

A Note to Candidates

We are an equal opportunity employer that values and encourages diversity, equity, and belonging at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Hiring will be done through IdeasX on behalf of Craft.co.

About the Role

Craft is looking for an experienced and motivated Senior Data Engineer to join a team responsible for a key product within the organization.
As a technical leader, you will have a great say in how solutions are engineered and delivered, and you will help mentor engineers on the team.

Craft gives engineers a lot of responsibility, which is matched by our investment in their growth and development. We are growing quickly, and the only limits to your future growth with Craft are your interests and abilities.

In This Role You Will

* Build and optimize data pipelines (batch and streaming) for big data systems.
* Extract, analyze, and model rich and diverse data sets.
* Apply data mining techniques such as anomaly detection, clustering, regression, classification, and summarization to extract value from our data sets.
* Design software that is easily testable and maintainable.
* Ensure data analysis is in line with company policies and regulations.
* Solve problems and communicate ideas effectively.
* Provide expertise on overall data engineering best practices, standards, architectural approaches, and complex technical resolutions.
* Work on an extensible data processing system that allows pipelines to be added and scaled with a low-code approach.

What We're Looking For

* Professional fluency in English, both written and spoken, is required.
* People who show curiosity by asking questions, digging into new technologies, and always trying to grow.
* 5+ years of experience in data engineering.
* 5+ years of experience with Python.
* Knowledge and experience of Amazon Web Services (AWS).
* Experience using Terraform or other infrastructure-as-code tools.
* A self-starter who works independently and likes to take initiative.
* Fundamental and deep knowledge of data engineering techniques: ETL (batch and streaming), data warehouses, data lakes, MPP.
* A data analytics and statistics background would be a large plus.

Our current technology stack:

* AWS Services: S3, Batch, Athena, Lambda, Kinesis, Kafka, Redshift, RDS / Aurora Postgres, DynamoDB
* Data Pipelines: Python, SQL, Pandas, Dask, aws-data-wrangler, Papermill, Airflow, Prefect

#Salary and compensation
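The stack above centers on Python and Pandas for batch pipelines. As a purely illustrative sketch of the kind of transform step such a pipeline might contain (the column names, data, and aggregation here are hypothetical, not taken from the posting):

```python
import pandas as pd


def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean and aggregate a hypothetical supplier-signal feed."""
    # Drop rows missing a supplier name or a signal value.
    cleaned = raw.dropna(subset=["supplier", "signal_value"])
    # Aggregate one row per supplier; a mean keeps the example simple.
    return cleaned.groupby("supplier", as_index=False)["signal_value"].mean()


# Example input with one incomplete row that the transform discards.
raw = pd.DataFrame(
    {
        "supplier": ["Acme", "Acme", "Globex", None],
        "signal_value": [1.0, 3.0, 5.0, 7.0],
    }
)
result = transform(raw)
```

In a real pipeline, a step like this would typically be wrapped in an orchestrator task (Airflow or Prefect, per the stack above) with the extract and load stages reading from and writing to S3.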
No salary data was published by the company, so we estimated the salary based on similar SaaS, Amazon, and Senior Engineer jobs:

$70,000 — $120,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location

Cali, Valle del Cauca, Colombia