This job post is closed and the position is probably filled. Please do not apply. Work for Sift and want to re-open this job? Use the edit link in the email when you posted the job!
🤖 Closed by robot after apply link errored w/ code 404 1 year ago
The Data Platform team is responsible for making Sift's data accessible across a variety of users and use cases. This team ensures the availability, correctness, and data privacy/compliance of information critical for Sift's day-to-day operations. Our customers include not just external clients but also Sift's data science product teams and our sales, business support services, and operations teams. We are super excited about our plans to build our next-generation data analytics solution as we approach a phase where we start diving into reporting/visualization and real-time accessibility to data across Sift.

What you'll do:

As a Staff Engineer on Sift's Data Platform team, you will build data warehousing and business intelligence systems that empower our customers, engineers, data scientists, and analysts to extract insights from our data. You will design and build petabyte-scale systems for high availability, high throughput, data consistency, security, and end-user privacy, defining our next generation of data analytics tooling. You will do data modeling and ETL enhancements to improve efficiency and data quality. You'd enforce best practices on data governance to ensure compliance and handle data truncation/deletion responsibly. You'd also have the opportunity to work with console reporting frameworks and build accessible dashboards for both monitoring and reporting. A strong Staff Engineer will also mentor other engineers and promote data engineering best practices across the team and the broader organization.

What would make you a strong fit:

2+ years of data modeling experience (Kimball, Inmon, or Linstedt)
Experience writing and optimizing complex ETL pipelines across multiple environments (Dataproc, notebooks, Snowflake ELT)
Experience programming (SQL, Java, Python) and/or using reporting tools (Looker, Tableau, QlikView, Power BI)
Experience designing and building data warehouse, data lake, or lakehouse solutions
Experience with distributed systems and distributed data storage
Experience with large-scale data warehousing solutions such as BigQuery, Snowflake, Redshift, or Presto
Experience with real-time streaming frameworks such as Kafka, Kinesis, Spark, or Flink
Experience with data modeling, from API design through the reporting solutions built against it
Strong communication and collaboration skills, particularly across teams or with functions such as data scientists or business analysts

Bonus points:

Prior experience building and maintaining enterprise analytics environments, including exposure to sales, finance, and marketing audiences
Experience with Python, Java, or similar object-oriented languages
Experience with cloud infrastructure (e.g. GCP, AWS)
Experience with workflow orchestrators such as Airflow or Cloud Composer
Experience with the analytics presentation layer (dashboards, reporting, and OLAP)
Experience designing for data compliance and privacy

A little about us:

Sift is the leading innovator in Digital Trust & Safety. Hundreds of disruptive, forward-thinking companies like Zillow and Twitter trust Sift to deliver outstanding customer experience while preventing fraud and abuse.

The Sift engine powers Digital Trust & Safety by helping companies stop fraud before it happens. But it's not just another anti-fraud platform: Sift enables businesses to tailor experiences to each customer according to the risk they pose. That means fraudsters experience friction, but honest users do not. By drawing on insights from our global network of customers, Sift allows businesses to scale, win, and thrive in the digital era.

Benefits and perks:

Competitive total compensation package
401k plan
Medical, dental, and vision coverage
Wellness reimbursement
Education reimbursement
Flexible time off

Sift is an equal opportunity employer. We make better decisions as a business when we can harness diversity in our experience, data, and background. Sift is working toward building a team that represents the worldwide customers that we serve, inclusive of people from all walks of life who can bring their full selves to work every day.

This document provides transparency around the way in which Sift handles personal data of job applicants: https://sift.com/recruitment-privacy

#Salary and compensation
No salary data published by company so we estimated salary based on similar jobs related to Analyst, Non Tech, SaaS, Accounting, Payroll, Education, Finance, Mobile, Senior, Excel, Legal, Design, Testing, Cloud, API, Backend, Shopify, Digital Nomad, Sales, Marketing and Engineer jobs that are similar:

$70,000 — $120,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)

#Location
San Francisco, California, United States
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
Remote Data Science Undergraduate Internship Summer 2023
Internship Overview
The Home Depot's Summer Internship program offers college students an opportunity to develop leadership skills and gain hands-on experience in a corporate environment. During an 11-week period from May 15 to July 28, 2023, interns will be assigned to a functional team such as Supply Chain, Marketing, e-commerce, Technology, Finance, Operations, Merchandising, Outside Sales & Services, Human Resources, etc. Interns will learn more about our retail business and our corporate offices while having the opportunity to work on a pre-assigned project that impacts the function they are supporting. Additionally, interns participate in networking and development activities that set them up for success as they build their careers.
Data Science Intern Description
The Data Science Intern Program offers talented college students the opportunity to develop their advanced data science skills while supporting the Company's strategic objectives. Intern candidates are assigned to a project aligned to business areas such as e-Commerce, Merchandising, Operations or Finance. The Home Depot's internship program was recently named in the Top 20 in the US and offers college students an opportunity to develop leadership skills and gain hands-on experience working with a number of leaders on projects that directly impact the business for one of the world's leading retailers. Data Science interns will focus on working with a variety of data science roles and functions to translate business questions into actionable insights and deliver high quality analytical solutions.
Tasks, Responsibilities, And Key Accountabilities Include
Business Collaboration
Participate in meetings across the enterprise data science community, gaining exposure to cross-functional business units.
Build networking relationships and receive mentoring from team members and top-level management
Communicating Results
Communicate findings and project status clearly and professionally through presentations
Provide recommendations to upper management.
Provide comprehensive report-out to senior leaders on assignments and other related projects
Data Analytics
Use strategic thinking to perform data analytics for a variety of business problems and opportunities, and create high-quality analytics solutions
Apply a wide variety of database applications and analytical tools, including SQL, Google Cloud BigQuery, and Python
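As a hedged illustration of the tools named above (not a Home Depot system), here is a minimal sketch of running a SQL query against Google Cloud BigQuery from Python with the google-cloud-bigquery client; the project, dataset, and column names are hypothetical placeholders.

```python
# Illustrative only: run a SQL query in BigQuery from Python and print a small result set.
# Project, dataset, table, and column names are placeholders, not real systems.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")  # assumes default credentials

sql = """
    SELECT store_id, SUM(sales_amount) AS total_sales
    FROM `example-analytics-project.retail.daily_sales`
    WHERE sale_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY store_id
    ORDER BY total_sales DESC
    LIMIT 10
"""

# client.query() starts the job; .result() waits for completion and returns an iterator of rows.
for row in client.query(sql).result():
    print(f"{row.store_id}: {row.total_sales:,.2f}")
```

The same query could be run directly in the BigQuery console; the Python client is useful when the result feeds a larger analysis or report.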
Description of Roles: (Career paths that utilize this skillset full-time)
Role
After the Internship, here are some examples of early career roles for interns with a background in Data Science
At The Home Depot, our associates always have room to move up and explore new opportunities.
Data Science Analyst
Associate Data Scientist
Data Scientist
Nature And Scope
Typically reports to Manager, Information Technology
No direct responsibility for supervising others.
Environment
Environmental Job Requirements:
Located in a comfortable indoor area. Any unpleasant conditions would be infrequent and not objectionable.
Travel
Typically requires overnight travel less than 10% of the time.
Standard Minimum Qualifications
Must be eighteen years of age or older.
Must be legally permitted to work in the United States.
Education Required
The knowledge, skills and abilities typically acquired through the completion of a high school diploma and/or GED.
Years Of Relevant Work Experience
0 years
Physical Requirements
Most of the time is spent sitting in a comfortable position and there is frequent opportunity to move about. On rare occasions there may be a need to move or lift light articles.
Preferred Qualifications
Working knowledge of Microsoft Office Suite
Working knowledge of Tableau
Working knowledge of presentation software (e.g., Microsoft PowerPoint)
Currently pursuing a Master's degree in a quantitative field (Analytics, Finance, Information Systems, etc.)
Excellent academic performance
Experience in a modern scripting language (preferably Python)
Experience running queries against data (preferably with Google BigQuery or SQL)
Experience with data visualization software (preferably Tableau)
Exposure to statistics, predictive modeling and other data science methodologies
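For a concrete sense of the "predictive modeling" exposure mentioned above, here is a minimal, hypothetical sketch using scikit-learn on synthetic data; it is not tied to any Home Depot dataset or project.

```python
# Illustrative only: fit a simple classifier on synthetic data and report hold-out AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real business dataset (e.g., predicting a binary order outcome).
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```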
Knowledge, Skills, Abilities And Competencies
Ability to communicate issues and recommend solutions in a timely manner.
This job post is closed and the position is probably filled. Please do not apply. Work for Fintech in Berlin and want to re-open this job? Use the edit link in the email when you posted the job!
🤖 Closed by robot after apply link errored w/ code 404 3 years ago
Elinvar enables asset & wealth managers to digitalize their business models by providing the required Platform as a Service. This includes applications for core processes as well as third-party connections. Our partners and customers, like Fondsdepotbank, Donner & Reuschel AG, and M.M. Warburg & Co., profit from the combination of leading technology and the optimal regulatory setup, as we hold all necessary BaFin licenses. With this comprehensive approach, we create the unique opportunity for our partners to go digital in one step and to utilize state-of-the-art analytics to create individualized solutions to the benefit of their clients.

As a Senior Software Engineer (2nd Level Response & Enablement) (m/f/div), you will join the Technical Operations team. Your key focus will be resolving operational issues, monitoring the system, and contributing to long-term improvement solutions. As you will be working across the whole platform, this role gives you a great opportunity to understand modern software system architectures and their challenges, and to learn different tools and business processes around wealth and asset management.

To succeed in this challenging position, you need to be able to work under pressure, be accurate in handling highly sensitive data and attentive to detail, work independently, seek out information, and communicate in a clear and active manner.

What will keep you challenged?

* Investigate incidents and issues on the platform
* Take ownership and resolve operational issues according to priorities within defined SLOs
* Monitor and analyze logs for errors, possible problems, and misbehaviors
* Participate in the development of internal monitoring, alerting, deployment, and orchestration tools
* Write scripts to automate repetitive and time-consuming operations (see the sketch below)
* Document technical procedures and contribute to the knowledge base
* Support software and configuration deployments
* Mentor other team members

What are we looking for?

* 5+ years of experience in relevant positions or domains
* Excellent overall understanding of cloud-based systems, modern software architectures, and principles
* Great troubleshooting and debugging skills
* Experience working with:
  * Java
  * AWS or another cloud environment
  * Relational databases (SQL)
  * Linux
* Considered an additional plus:
  * Experience working as a TechOps, DevOps, SRE, or second-level support specialist
  * Experience supporting platforms; a financial background
  * Experience using Python, React, or Spring
* We use the following tools on a regular basis: Kafkacat, kubectl, Jira, Confluence, GitLab, Kibana, Grafana, Sentry, OpsGenie, ArgoCD, PgAdmin/DBeaver, Postman/curl. While knowledge of any of these tools is not mandatory (you will learn them), prior experience or the ability to explain their purpose would add a couple of extra points to your profile.
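As a rough, hypothetical example of the kind of small automation script this role involves (the log path and format are assumptions, not Elinvar's actual setup), a script like the following could scan an application log and summarize the most frequent error messages:

```python
# Illustrative only: count the most frequent ERROR messages in a log file.
import re
from collections import Counter

LOG_PATH = "/var/log/app/application.log"  # placeholder path
ERROR_RE = re.compile(r"\bERROR\b\s+(?P<message>.+)$")

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group("message").strip()] += 1

# Print the ten most common error messages with their counts.
for message, count in counts.most_common(10):
    print(f"{count:6d}  {message}")
```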
What will keep you happy?

* An outstanding, highly motivated, and international team that values a positive and open working environment, and a group of people who genuinely appreciate and support each other
* An inspiring momentum to reshape the wealth management industry by replacing legacy IT with a modern, sustainable IT platform
* Everything you need to excel in your profession, backed by some of the world's most recognized investors
* An open corporate culture without a dress code, with flexible working hours and remote office options
* A beautiful loft-style office, situated in bustling Prenzlberg just a few minutes from Alexanderplatz, a lunch and after-work Dorado at your doorstep
* An employer that welcomes diversity and actively promotes equal opportunities on every level

#Salary and compensation
No salary data published by company so we estimated salary based on similar jobs related to Senior, Engineer, Developer, Digital Nomad, React, Finance, Cloud and Excel jobs that are similar:

$65,000 — $120,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Platform.sh and want to re-open this job? Use the edit link in the email when you posted the job!
🤖 Closed by robot after apply link errored w/ code 404 3 years ago
Mission:

As a Senior Financial Operations Engineer at Platform.sh, you will work inside our Operations Infrastructure team and report directly to its VP. You will also work in partnership with other departments, most notably finance and engineering.

This role is accountable for developing a robust program for cloud cost management that includes, but is not limited to, managing reservations, a tagging framework and governance model, budgets and change management, the cost optimization lifecycle, and showback dashboarding.

You will work directly with our external service providers while collaborating with our engineering, product, and finance teams to provide deeper meaning to cost and usage data across our product.

You are the trusted communications bridge between teams leveraging infrastructure and all teams that require information about its usage, notably helping finance quantify the cost impacts of technical decisions and helping engineering implement cost best practices throughout their stack.

You will provide the tools to develop and measure cost-related insights, metrics, and KPIs to help all departments involved better understand how to build our products more efficiently.

The right candidate has deep expertise in managing the cost and utilization of large-scale digital infrastructure, from commercial cloud providers to on-prem physical data centers.

You will work remotely on a full-time basis as long as you are based anywhere within the EMEA timezones.

Your main mission encompasses but is not limited to:

* Build automated reporting tools to analyze cloud resource usage (a minimal sketch appears after the qualifications list below).
* Define resource tagging best practices and control their application.
* Create, document, and monitor reservation dashboards and processes for each IaaS provider supported.
* Define and monitor events and usage patterns, translating infrastructure data into insightful information. Present recommendations to various stakeholders.
* Design cost alerts (with each cloud provider) to identify and monitor cost increases.
* Participate in planning and designing new features and products, helping other teams understand infrastructure from a cost perspective.
* Identify and extract actionable tasks and processes that optimize the cost of Platform.sh architecture with regard to infrastructure.
* Plan, coordinate, and implement cost reduction efforts with engineering, customer success, and other teams.
* Help the finance team receive fresh and correct billing and operational data, insights, and analysis.
* Provide cost projections to Finance in order to get approvals on large expense commitments and cash outflows.

To be successful in this role, you need to:

* Thrive and work well with others in a fast-paced and demanding environment. You have excellent communication and interpersonal skills.
* Fully understand the architecture of Platform.sh products and their relation to underlying IaaS providers.
* Be able to report in an organized and readable manner.
* Be very organized and fearless in keeping projects on time.

Qualifications:

* Degree in computer science, or equivalent technical familiarity with large-scale infrastructure management.
* 3+ years of experience in large-scale cloud cost optimization.
* Ability to build enterprise-level infrastructure plans, both near-term operational and forward-looking conceptual.
* Experience in the business application of advanced analytics and data modeling tools.
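To make the "automated reporting tools" responsibility above more concrete, here is a minimal, hypothetical sketch that pulls the last 30 days of AWS spend grouped by service via the Cost Explorer API (boto3). It assumes AWS credentials are already configured, and AWS is only one of the providers such a program would cover.

```python
# Illustrative only: summarize the last 30 days of AWS cost by service using Cost Explorer.
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")  # AWS Cost Explorer; assumes credentials are configured

end = date.today()
start = end - timedelta(days=30)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Aggregate daily amounts per service across the whole window.
totals: dict[str, float] = {}
for day in response["ResultsByTime"]:
    for group in day["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        totals[service] = totals.get(service, 0.0) + amount

# Print the ten most expensive services for the period.
for service, amount in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{service}: ${amount:,.2f}")
```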
#Salary and compensation

No salary data published by company so we estimated salary based on similar jobs related to Finance, Senior, Engineer, Ops and Cloud jobs that are similar:

$70,000 — $120,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Selerity and want to re-open this job? Use the edit link in the email when you posted the job!
Summary:

We are looking for a Senior Java DevOps Engineer to join Selerity's team, scaling up an A.I.-driven analytics and recommendation platform and integrating it into enterprise workflows. Highly competitive compensation plus significant opportunities for professional growth and career advancement.

Employment Type: Contract or Full-time

Location is flexible: We have offices in New York City and Oak Park, Illinois (a Chicago suburb), but about half of our team currently works remotely from various parts of Europe, North America, and Asia.

Job Description:

Want to change how the world engages with chat, research, social media, news, and data?

Selerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We're tackling big challenges in predictive analytics, conversational interfaces, and workflow automation and need your help!

We're looking for an experienced DevOps Engineer to join a major initiative at a critical point in our company's growth, assisting in the architecture, development, and maintenance of our stack. The majority of Selerity's applications are developed in Java and C++ on Linux, but knowledge of other languages (especially Python, JavaScript, and Scala), platforms, and levels of the stack is very helpful.

Must-haves:

* Possess a rock-solid background in Computer Science (minimum BS in Comp Sci or related field) plus at least 5 years (ideally 10+) of challenging work experience.
* Implementation of DevOps/SRE processes at scale, including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite).
* Demonstrated proficiency building and modifying Java applications in Linux environments (using Git, SVN); ideally also a C++ developer.
* Significant orchestration expertise with the Ansible (preferred), Chef, or Puppet deployment automation systems in a cloud environment (at least a dozen servers, ideally more).
* Direct experience in the design, implementation, and maintenance of SaaS APIs in Java that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).
* Solid track record of making effective design decisions balancing near-term and long-term objectives.
* Know when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.
* Work effectively in agile teams with remote members; get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.
* Clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc.

Nice-to-haves:

* Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as OSI layer 1 and 2 protocols.
* Experience with Internet routing protocols such as BGP.
* Implementation of software-defined networking or other non-traditional networking paradigms.
* Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.
* Full-stack development and operations experience with web apps on Node.js.
* Experience with analytics visualization libraries.
* Experience with large-scale analytics and machine learning technologies, including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc., running in production.
* Conversant with relational, column, object, and graph database fundamentals, with strong practical experience in any of those paradigms.
* Deep understanding of how to build software agents and conversational workflows.
* Experience with additional modern programming languages (Python, Scala, …)

Our stack:

* Java, C++, Python, JavaScript/ECMAScript + Node, Angular, RequireJS, Electron, Scala, etc.
* A variety of open-source and in-house frameworks for natural language processing and machine learning, including artificial neural networks / deep learning.
* A hybrid of AWS (EC2, S3, RDS, R53) plus dedicated datacenter network, server, and GPU/coprocessor infrastructure.
* Cassandra and Aurora, plus an in-house streaming analytics pipeline (similar to Apache Flink) and indexing/query engine (similar to Elasticsearch).
* In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (similarities to protobufs/FixFast/zeromq/itch).
* Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, IntelliJ.

This position offers a great opportunity to work with advanced technologies, collaborate with a top-notch, global team, and disrupt a highly visible, multi-billion-dollar market.

Compensation:

We understand how to attract and retain the best talent and offer a competitive mix of salary, benefits, and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.

We have team members on four continents and we're adept at making remote workers feel like part of the team. If you join our NYC main office, be sure to bring your Nerf toys, your drones, and your maker gear; we're into that stuff, too.

Interview Process:

If you can see yourself at Selerity, send your resume and/or online profile (e.g. LinkedIn) to [email protected]. We'll arrange a short introductory phone call and, if it sounds like there's a match, we'll arrange for you to meet the team for a full interview.

The interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. It is intended to be challenging, but the developers you meet and the topics you'll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity.

We value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence. You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity, or national origin.

Recruiters: Please note that we are not currently accepting referrals from recruiters for this position.

#Salary and compensation
No salary data published by company so we estimated salary based on similar jobs related to DevOps, Java, Senior, Engineer, Crypto, Finance, Cloud, SaaS, Apache and Linux jobs that are similar:

$70,000 — $120,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. Work for Selerity and want to re-open this job? Use the edit link in the email when you posted the job!
Summary:

We are looking for a Senior DevOps (Site Reliability) Engineer to join Selerity's team, scaling up an A.I.-driven analytics and recommendation platform and integrating it into enterprise workflows. Highly competitive compensation plus significant opportunities for professional growth and career advancement.

Employment Type: Contract or Full-time

Location is flexible: We have offices in New York City and Oak Park, Illinois (a Chicago suburb), but about half of our team currently works remotely from various parts of Europe, North America, and Asia.

Job Description:

Want to change how the world engages with chat, research, social media, news, and data?

Selerity has dominated ultra-low-latency data science in finance for almost a decade. Now our real-time content analytics and contextual recommendation platform is gaining broader traction in enterprise and media applications. We're tackling big challenges in predictive analytics, conversational interfaces, and workflow automation and need your help!

We're looking for an experienced DevOps (Site Reliability) Engineer to join a major initiative at a critical point in our company's growth. The majority of Selerity's applications are developed in Java and C++ on Linux, but knowledge of other languages (especially Python and JavaScript), platforms, and levels of the stack is very helpful.

Must-haves:

* Possess a rock-solid background in Computer Science (minimum BS in Comp Sci or related field) plus at least 5 years (ideally 10+) of challenging work experience.
* Implementation of DevOps/SRE processes at scale, including continuous integration (preferred: Jenkins), automated testing, and platform monitoring (preferred: JMX, Icinga, Grafana, Graphite).
* Demonstrated proficiency building and modifying Java and C++ applications in Linux environments (using Git, SVN).
* Significant operations expertise with the Ansible (preferred), Chef, or Puppet deployment automation systems in a cloud environment.
* Direct experience in the design, implementation, and maintenance of SaaS APIs that are minimal, efficient, scalable, and supportable throughout their lifecycle (OpenLDAP).
* Solid track record of making effective design decisions balancing near-term and long-term objectives.
* Know when to use commercial or open-source solutions, when to delegate to a teammate, and when to roll up your sleeves and code it yourself.
* Work effectively in agile teams with remote members; get stuff done with minimal guidance and zero BS, help others, and know when to ask for help.
* Clearly communicate complex technical and product issues to non-technical team members, managers, clients, etc.

Nice-to-haves:

* Proficiency with Cisco, Juniper, and other major network hardware platforms, as well as OSI layer 1 and 2 protocols.
* Experience with Internet routing protocols such as BGP.
* Implementation of software-defined networking or other non-traditional networking paradigms.
* Proficiency with SSL, TLS, PGP, and other standard crypto protocols and systems.
* Full-stack development and operations experience with web apps on Node.js.
* Experience with analytics visualization libraries.
* Experience with large-scale analytics and machine learning technologies, including TensorFlow/Sonnet, Torch, Caffe, Spark, Hadoop, cuDNN, etc.
* Conversant with relational, column, object, and graph database fundamentals, with strong practical experience in any of those paradigms.
* Deep understanding of how to build software agents and conversational workflows.
* Experience with additional modern programming languages (Python, Scala, …)

Our stack:

* Java, C++, Python, JavaScript/ECMAScript + Node, Angular, RequireJS, Electron, Scala, etc.
* A variety of open-source and in-house frameworks for natural language processing and machine learning, including artificial neural networks / deep learning.
* A hybrid of AWS (EC2, S3, RDS, R53) plus dedicated datacenter network, server, and GPU/coprocessor infrastructure.
* Cassandra and Aurora, plus an in-house streaming analytics pipeline (similar to Apache Flink) and indexing/query engine (similar to Elasticsearch).
* In-house messaging frameworks for low-latency (sub-microsecond sensitivity) multicast and global-scale TCP (similarities to protobufs/FixFast/zeromq/itch).
* Ansible, Git, Subversion, PagerDuty, Icinga, Grafana, Observium, LDAP, Jenkins, Maven, Purify, VisualVM, Wireshark, Eclipse, IntelliJ.

This position offers a great opportunity to work with advanced technologies, collaborate with a top-notch, global team, and disrupt a highly visible, multi-billion-dollar market.

Compensation:

We understand how to attract and retain the best talent and offer a competitive mix of salary, benefits, and equity. We also understand how important it is for you to feel challenged, to have opportunities to learn new things, to have the flexibility to balance your work and personal life, and to know that your work has impact in the real world.

We have team members on four continents and we're adept at making remote workers feel like part of the team. If you join our NYC main office, be sure to bring your Nerf toys, your drones, and your maker gear; we're into that stuff, too.

Interview Process:

If you can see yourself at Selerity, send your resume and/or online profile (e.g. LinkedIn) to [email protected]. We'll arrange a short introductory phone call and, if it sounds like there's a match, we'll arrange for you to meet the team for a full interview.

The interview process lasts several hours and is sometimes split across two days on site, or about two weeks with remote interviews. It is intended to be challenging, but the developers you meet and the topics you'll be asked to explain (and code!) should give you a clear sense of what it would be like to work at Selerity.

We value different perspectives and have built a team that reflects that diversity while maintaining the highest standards of excellence. You can rest assured that we welcome talented engineers regardless of their age, gender, sexual orientation, religion, ethnicity, or national origin.

Recruiters: Please note that we are not currently accepting referrals from recruiters for this position.

#Salary and compensation
No salary data published by company so we estimated salary based on similar jobs related to DevOps, Senior, Crypto, Finance, Java, Cloud, Python, SaaS, Engineer, Apache and Linux jobs that are similar:

$70,000 — $120,000/year

#Benefits

401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.