Position Summary

Effectual Cloud Networking Consultants are members of the Public Sector Program Management team responsible for ensuring that customer-facing projects are delivered with customer satisfaction and technical excellence. Effectual Cloud Networking Consultants are "Brand Ambassadors" and are expected to stay current on leading practices to deliver high-quality, well-conceived solutions to customers.

A Glimpse into the Daily Routine of a Cloud Networking Consultant

Lead the networking of state-of-the-art, automated, fault-tolerant, and scalable AWS environments that adhere to AWS best practices in standard and GovCloud regions. Collaborate with cross-functional teams, including external agencies, internal customer teams, application teams, and cloud architects, to design and implement secure and scalable cloud networking architectures. Assess network performance and troubleshoot connectivity issues during the migration process. Provide expert guidance on network automation, security, and compliance considerations specific to public sector agencies. Stay updated on emerging cloud networking technologies and industry best practices.
The day-to-day work centers on designing and implementing networking solutions, troubleshooting network issues, and providing strategic advice for successful application migrations to the cloud.

Responsibilities

* Collaborate with clients to understand their application networking requirements, dependencies, and performance expectations for successful migrations
* Collaborate with application teams, cloud architects, and engineers to ensure smooth integration of application networking components within the cloud environment
* Design, maintain, and provide subject matter expertise for network connections within, and interfaces outside, current network boundaries
* Assess existing network architectures and design cloud networking solutions that ensure secure, scalable, and high-performance connectivity and that adhere to industry regulations and compliance frameworks
* Develop comprehensive network migration plans, including network discovery, assessment, and planning for transitioning application networks to the cloud
* Design and implement cloud networking architectures, including virtual private clouds (VPCs), subnets, routing, load balancing, and connectivity options, aligning with best practices for application migration
* Configure and optimize cloud networking services, including but not limited to Amazon VPC, AWS Transit Gateway, AWS Direct Connect, and AWS Network Firewall, to provide seamless and reliable network connectivity for migrated applications
* Conduct network performance analysis, monitoring, and troubleshooting during the migration process to identify and resolve connectivity and performance issues affecting application performance
* Implement network security measures, including firewalls, access controls, and encryption, to protect application networking environments in the cloud
* Assist in evaluating and selecting appropriate network monitoring and management tools for cloud networking environments to ensure ongoing visibility and control
* Stay updated on emerging cloud networking technologies, industry trends, and best practices related to application migrations to the cloud

Qualifications

* Minimum Education: Bachelor's degree in a related discipline, AND
* Minimum Experience: 4 years of experience in networking, preferably with specialized cloud technologies, OR
* Substitution/Alternative to Minimum Education and Experience: at least 12 years of on-the-job experience
* Able to work remotely, but also able to go on-site in Washington, DC as requested, potentially on an occasional post-pandemic cadence
* AWS professional certifications: Advanced Networking Specialty, Solutions Architect, and/or DevOps certifications; CompTIA Network+
* Must be a US Citizen
* Proven experience as a networking consultant, specifically supporting application migrations to the cloud
* Strong knowledge of networking concepts, protocols, and architectures, including TCP/IP, VLANs, VPNs, load balancing, and application-level networking
* Expertise in designing and implementing network solutions in public cloud environments, primarily AWS, with a focus on application migration scenarios
* Strong understanding of applying Trusted Internet Connections (TIC) 3.0 to application workloads
* Proficiency in cloud networking services, such as Amazon VPC, Azure Virtual Network, or Google Cloud VPC, and their capabilities for supporting application migrations
* Familiarity with network security principles and best practices, including firewalls, access controls, and encryption, within cloud environments
* Experience with network discovery and assessment tools for evaluating application network architectures, dependencies, and performance requirements
* Strong understanding of network performance optimization and troubleshooting methodologies during application migrations to the cloud
* Knowledge of network automation and orchestration tools, scripting languages (Python, PowerShell), and network APIs for migration automation
* Excellent communication and consulting skills to effectively interact with clients, understand their application migration requirements, and provide technical guidance
* Ability to assess and analyze complex networking requirements and translate them into practical cloud networking designs for successful application migrations
* Leadership and mentoring abilities to guide junior consultants and contribute to team collaboration during application migration engagements
* Ability to work with multiple clients in parallel
* Ability to work an Eastern Time Zone schedule

Nice-to-Have Skills and Experience

* Active Clearance or Public Trust (DOJ preferred)
* Understanding of the Zero Trust framework
* Network automation and orchestration (Ansible, Chef, Puppet)
* Network Function Virtualization (NFV) and Software-Defined Networking (SDN)
* Network monitoring and analytics (Nagios, Zabbix, Prometheus, ELK Stack)
* Network security certifications (CCNP Security, CISSP, CEH)
* Cloud-native networking services (AWS Transit Gateway, Azure Virtual WAN, Google Cloud Network Connectivity Center)
* Network performance optimization (traffic engineering, QoS)
* Network troubleshooting and debugging (tcpdump, Wireshark)
* Container networking (Kubernetes, service mesh)

Location: Remote

Salary Range for this position: $130,000-175,000

Travel Requirements

The travel requirements for this position may vary depending on our needs. You should be prepared to travel domestically as necessary. Travel frequency and duration will be communicated in advance, allowing for proper planning and coordination. Typically, travel may include attending conferences, client meetings, training sessions, and other business-related events.
The ability to travel is essential for fulfilling the responsibilities of this role and supporting our organization's goals and objectives.

Company Offered Benefits

Full-time employees are eligible to participate in our employee benefit programs:

* Medical, dental, and vision health insurance
* Short-term disability, long-term disability, and life insurance
* 401k with Company match
* Paid time off (PTO) (120 hours of PTO that accrue over one year)
* Paid time off for major holidays (14 days per year)
* These and any other employee benefit offerings are subject to management's discretion and may change at any time.

PHYSICAL DEMANDS AND WORK ENVIRONMENT

The work is generally performed in an office environment. Physical demands include sitting, keyboarding, and verbal and written communication. Employees are occasionally required to stand; walk; reach with hands and arms; climb or balance; and stoop, kneel, crouch, or crawl. The physical demands described here are representative of those that must be met by an employee to perform the essential functions of this position. Reasonable accommodation may be made to enable individuals with disabilities to perform the functions.

This job description may not be inclusive of all assigned duties, responsibilities, or aspects of the job described, and may be amended at any time at the sole discretion of the Employer. Duties and responsibilities are subject to possible modification to reasonably accommodate individuals with disabilities. To perform this job, incumbents will possess the skills, aptitudes, and abilities to perform each duty proficiently. This document does not create an employment contract, implied or otherwise, other than an "at will" relationship. Effectual Inc. is an EEO employer and does not discriminate on the basis of any protected classification in its hiring, promoting, or any other job-related opportunity.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, Amazon, DevOps, Education and Cloud:
$30,000 — $70,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Jersey City, New Jersey, United States
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment that the company promises to reimburse later, and never pay for required trainings. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages selling "how to work online" guides are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter; a good check is whether the site/email domain matches the company's main domain name. Scams in remote work are rampant, so be careful! When clicking the apply button above, you will leave Remote OK and go to the company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information there (external sites) or here.
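As context for the VPC design work the networking consultant posting above describes, subnet planning usually starts with carving a CIDR block into per-Availability-Zone ranges. A minimal sketch of that arithmetic using Python's standard `ipaddress` module; the CIDR block and AZ names are hypothetical, not taken from the posting:

```python
import ipaddress

# Hypothetical VPC CIDR; real allocations come from the customer's IP plan.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into four /18 blocks: e.g. one per AZ, plus a spare.
subnets = list(vpc.subnets(new_prefix=18))

for az, net in zip(["us-east-1a", "us-east-1b", "us-east-1c"], subnets):
    # num_addresses counts the whole range; AWS additionally reserves
    # five addresses per subnet, so usable capacity is slightly lower.
    print(az, net, net.num_addresses)
```

The same module can answer overlap questions (`net.overlaps(other)`), which matters when stitching VPCs together through a Transit Gateway.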
We are seeking a skilled Platform Engineer to join our team. This person will be instrumental in developing and optimizing our cloud architecture and in managing our development and analytics tooling. The ideal candidate has a proven track record of maintaining and developing cloud-based architecture that supports large-scale data initiatives for data science and software development teams.

The Platform Engineer will be responsible for designing, implementing, and managing cloud architectures to meet business requirements. The ideal candidate possesses expertise in translating business needs into scalable cloud solutions while ensuring security, reliability, and cost-effectiveness.

Responsibilities:
* Collaborate with business stakeholders to understand the product requirements and translate them into scalable and resilient cloud architectures.
* Collaborate closely with the Data Engineering, Data Science, and Software Development teams to contribute to the design of the cloud solutions for our products.
* Implement optimized and secure cloud-based/cloud-native architecture end to end.
* Develop and document cloud architecture designs, ensuring alignment with industry best practices.
* Provide expertise in cloud and platform engineering to the Product Data team, ensuring alignment with the company's strategic goals.
* Contribute to the selection and integration of cloud-based vendors, tools, and frameworks.
* Keep up with emerging trends in cloud engineering and introduce new technologies or practices that can benefit the organization.
* Implement security measures to safeguard cloud environments, including identity and access management, encryption, and compliance controls.
* Conduct regular security assessments and address vulnerabilities promptly.
* Monitor and optimize cloud infrastructure for performance, cost, and reliability.
* Implement performance tuning strategies to enhance overall system efficiency.
* Continuously improve and innovate in cloud and platform engineering.
* Implement and manage the provisioning of cloud resources based on project requirements.
* Maintain and support cloud-based and cloud-native architecture, including access controls, security, and networking.
* Configure and fine-tune cloud infrastructure components for optimal performance.
* Perform audits and assessments of cloud environments to ensure compliance with security and regulatory standards.
* Provide recommendations for continuous improvement and adherence to best practices.
* Lead the deployment of applications onto our cloud platform, ensuring seamless integration and functionality.
* Manage and monitor cloud applications to maintain performance, availability, and scalability.

Qualifications:
* 3-5 years of proven experience as a Platform Engineer or in a similar role designing, implementing, and managing cloud architectures.
* Expertise in constructing, installing, and maintaining large-scale cloud-native and cloud-based architecture.
* Database management expertise: Postgres, Snowflake, Lucene-based search engines (Apache Solr, AWS OpenSearch, Elasticsearch)
* Cloud-native tooling expertise: Amazon S3, AWS EMR, Amazon EC2, Amazon RDS, Amazon SageMaker, Amazon ECS, Amazon ECR, Amazon VPC, AWS IAM (alternatives from other cloud providers are acceptable)
* Cloud-based application tooling: Databricks administration
* Strong communication in English, with the ability to communicate technical concepts to a non-technical audience.
* Cloud certifications from cloud providers (AWS, GCP, Azure)
* Experience with streaming technologies such as Apache Kafka and AWS Kinesis
* Experience productionizing ML-based cloud solutions.

#LI-RT9

Edelman Data & Intelligence (DXI) is a global, multidisciplinary research, analytics and data consultancy with a distinctly human mission.

We use data and intelligence to help businesses and organizations build trusting relationships with people: making communications more authentic, engagement more exciting and connections more meaningful.

DXI brings together and integrates the necessary people-based PR, communications, social, research and exogenous data, as well as the technology infrastructure to create, collect, store and manage first-party data and identity resolution. DXI is comprised of over 350 research specialists, business scientists, data engineers, behavioral and machine-learning experts, and data strategy consultants based in 15 markets around the world.

To learn more, visit: https://www.edelmandxi.com

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, Cloud and Engineer:
$70,000 — $100,000/year
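The provisioning and Infrastructure-as-Code responsibilities in the Platform Engineer posting above boil down to reconciling a desired configuration against what currently exists. A toy sketch of that reconciliation pattern, in the spirit of what IaC tools compute during a plan step; the resource names and configs are invented for illustration:

```python
def plan_changes(current: dict, desired: dict) -> dict:
    """Diff two name -> config mappings the way an IaC plan step does:
    resources only in `desired` are created, resources only in `current`
    are deleted, and resources in both with differing configs are updated."""
    return {
        "create": sorted(set(desired) - set(current)),
        "delete": sorted(set(current) - set(desired)),
        "update": sorted(
            name for name in set(current) & set(desired)
            if current[name] != desired[name]
        ),
    }

# Hypothetical resource inventories.
current = {"bucket-logs": {"versioning": False}, "db-main": {"size": "small"}}
desired = {"bucket-logs": {"versioning": True}, "vpc-core": {"cidr": "10.0.0.0/16"}}
print(plan_changes(current, desired))
# {'create': ['vpc-core'], 'delete': ['db-main'], 'update': ['bucket-logs']}
```

Real tools add dependency ordering and drift detection on top, but the desired-vs-actual diff is the core idea.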
About This Role

Hello prospective pickle! Design Pickle is looking for a Data Engineer to join our team and help us develop new ways to inspire our customers and streamline processes for our global network of creatives. You will be tasked with building and maintaining the right data pipelines and data models to power our decision-making and enable actionable insights.

The ideal candidate will be able to create efficient, flexible, extensible, and scalable data models, ETL designs, and data integration services. They will also be required to support and manage the growth of these data solutions. Given our aspirational vision, to be the most helpful creative platform in the world, and the nature of our products, this role requires entrepreneurial drive and thinking, comfort with ambiguity, and the ability to break down and solve complex problems.

If you have ever wanted to make a significant contribution and help shape the trajectory of a startup, this role is for you!

Reports to: Director, Data Science & Analytics

On a daily basis, works closely with: Engineering, Product Management, Product Marketing and Global Operations.

Location: Design Pickle is a fully remote company with a Company Hub in Scottsdale, Arizona.

Who We Are Looking For

First, Design Pickle is anything but typical. We're a group of hard-working, creativity-loving individuals from around the world.

Do we love pickles, too? Most of us! But don't stress if pickles aren't your thing. It's not a deal-breaker. We do look for a passion and interest in something, though, because our employees' uniqueness is what helped make us the great company we are today.

We stand by our vision, purpose, and values, and these are mission-critical to how you show up every single day.

Specific to your role, we're looking for individuals who have...

* A robust background with at least two years dedicated to software development, encompassing the full spectrum of the product lifecycle. This includes ideation, development, deployment, and iteration.
* A minimum of three years' expertise in crafting and optimizing SQL queries. Candidates should be well-versed in manipulating and extracting data to meet business needs.
* Over two years of hands-on experience with ETL (Extract, Transform, Load) processes, showcasing proficiency in designing, implementing, and maintaining robust ETL pipelines.
* At least two years of programming experience with a focus on object-oriented languages, such as Python.
* A minimum of two years in database schema design and dimensional data modeling, illustrating a deep understanding of how to structure and model data effectively for scalability and performance.
* Proven experience in the data warehousing field, indicating a solid foundation in managing large-scale data storage solutions.
* Demonstrated ability to analyze datasets to uncover discrepancies and inconsistencies, thereby ensuring data quality and reliability.
* Practical experience with Amazon Web Services (AWS), including but not limited to S3, Redshift, and Machine Learning services. Candidates should be comfortable leveraging these services to enhance data storage, processing, and analytics capabilities.
* Expertise in managing and clearly communicating plans for data sourcing and pipeline development to stakeholders within the organization, ensuring alignment and understanding across teams.
* Exceptional problem-solving abilities, with a knack for navigating unclear requirements and delivering effective solutions.

Bonus Pickle Points:

* A Bachelor's or Master's degree in Computer Science, a related technical field, or equivalent practical experience.
* Additional experience with AWS, specifically in managing Data Lakes, is highly regarded.
* Familiarity with building and utilizing reports in business intelligence tools such as Power BI and Tableau, enhancing decision-making and insights.
* Proficiency in Ruby on Rails, adding value through versatile web development skills.
* A proven track record of working independently within globally distributed teams, showcasing effective communication and collaboration across different time zones.
* Demonstrated capacity to leverage data in influencing pivotal business decisions, underlining the strategic use of insights in driving outcomes.

Key Objectives and Responsibilities

As a fast-growing company, our roles are always evolving. However, we want you to know exactly what you're walking into.
In the first 90 days, here is a preview of what's expected:

* Conceptualize and own the data architecture for our suite of tools and analytics platform.
* Create and contribute to frameworks that improve the efficiency of logging data, while working with data infrastructure to troubleshoot and resolve issues.
* Collaborate with engineers, product managers, product design, and product marketing to understand data needs, representing key data insights in a meaningful and actionable way.
* Define and manage SLAs for all data sets.
* Determine and implement the security model based on security and privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes.
* Design, build, and launch sophisticated data models and visualizations that support our products and global operational processes.
* Solve data integration problems, utilizing ETL patterns, frameworks, and query techniques, sourcing from structured and unstructured data sources.
* Optimize pipelines, dashboards, frameworks, and systems to streamline development of data artifacts.
* Mentor team members in data engineering best practices.
* Commit to documentation.

$100,000 - $115,000 a year

The compensation range for this position is $100,000 to $115,000 annually. The actual salary offer to a candidate will be made with mindful consideration of many factors. These factors include but are not limited to skills, qualifications, education/knowledge, experience, and alignment with market data for a given location within the US. In addition to base salary, some positions may be eligible for additional forms of compensation such as bonuses or commissions. This salary data is for our US-based positions only.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, Ruby, Marketing and Engineer:
$77,500 — $117,500/year
#Location
Scottsdale, Arizona, United States
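The dimensional data modeling the Data Engineer posting above asks for usually means star schemas: fact tables joined to dimension tables at query time. A minimal, self-contained sketch using Python's built-in `sqlite3`; the table names and rows are invented for illustration:

```python
import sqlite3

# Toy star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_client (client_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_request (request_id INTEGER PRIMARY KEY,
                               client_id INTEGER REFERENCES dim_client,
                               hours REAL);
""")
conn.executemany("INSERT INTO dim_client VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])
conn.executemany("INSERT INTO fact_request VALUES (?, ?, ?)",
                 [(10, 1, 2.5), (11, 1, 1.0), (12, 2, 4.0)])

# Typical analytical query: aggregate facts grouped by a dimension attribute.
rows = conn.execute("""
    SELECT d.name, SUM(f.hours)
    FROM fact_request f JOIN dim_client d USING (client_id)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('Acme', 3.5), ('Globex', 4.0)]
```

Warehouse engines like Redshift scale the same shape of query; the modeling decision of narrow facts plus descriptive dimensions is what the role is exercising.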
What You'll Do:

We're looking for a talented and intensely curious Senior AWS Cloud Engineer who is nimble and focused, with a startup mentality. In this newly created role you will be the liaison between data engineers, data scientists, and analytics engineers. You will work to create cutting-edge architecture that provides increased performance, scalability, and concurrency for Data Science and Analytics workflows.

Responsibilities

* Provide AWS infrastructure support and systems administration for new and existing products, implemented through IAM, EC2, S3, AWS networking (VPC, IGW, NGW, ALB, NLB, etc.), Terraform, CloudFormation templates, and security services: Security Groups, GuardDuty, CloudTrail, Config, and WAF.
* Monitor and maintain production, development, and QA cloud infrastructure resources for compliance with all six pillars of the AWS Well-Architected Framework, including the Security pillar.
* Develop and maintain Continuous Integration (CI) and Continuous Deployment (CD) pipelines needed to automate testing and deployment of all production software components as part of a fast-paced, agile Engineering team. Technologies required: ElastiCache, Bitbucket Pipelines, GitHub, Docker Compose, Kubernetes, Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS), and Linux-based server instances.
* Develop and maintain Infrastructure as Code (IaC) services for creation of ephemeral cloud-native infrastructure hosted on Amazon Web Services (AWS) and Google Cloud Platform (GCP). Technologies required: AWS CloudFormation, Google Cloud Deployment Manager, AWS SSM, YAML, JSON, Python.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 99.99% uptime. Technologies required: AWS IAM, AWS CloudWatch, AWS EventBridge, AWS SSM, AWS SQS, AWS SNS, AWS Lambda and Step Functions, Python, Java, RDS Postgres, RDS MySQL, AWS S3, Docker, AWS Elasticsearch, Kibana, AWS Amplify.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 100% cybersecurity compliance and surveillance. Technologies required: AWS SSM, YAML, JSON, Python, RDS Postgres, Tenable, CrowdStrike EPP, Sophos EPP, Wiz CSPM, Linux Bash scripts.
* Design and code technical solutions that improve the scalability, performance, and reliability of all Data Acquisition pipelines. Technologies required: Google Ads APIs, YouTube Data APIs, Python, Java, AWS Glue, AWS S3, AWS SNS, AWS SQS, AWS KMS, AWS RDS Postgres, AWS RDS MySQL, AWS Redshift.
* Monitor and remediate server and application security events as reported by CrowdStrike EPP, Tenable, Wiz CSPM, and Invicti

Who you are:

* Minimum of 5 years of systems administration or DevOps engineering experience on AWS
* Track record of success in systems administration, including system design, configuration, maintenance, and upgrades
* Excels in architecting, designing, developing, and implementing cloud-native AWS platforms and services
* Knowledgeable in managing cloud infrastructure in a production environment to ensure high availability and reliability
* Proficient in automating system deployment, operation, and maintenance using Infrastructure as Code: Ansible, Terraform, CloudFormation, and other common DevOps tools and scripting
* Experienced with Agile processes in a structured setting required; Scrum and/or Kanban
* Security and compliance standards experience, such as PCI and SOC, as well as data privacy and protection standards, a big plus
* Experienced in implementing dashboards and data for decision-making related to team and system performance, relying heavily on telemetry and monitoring
* Exceptional analytical capabilities
* Strong communication skills and ability to effectively interact with Engineering and Business stakeholders

Preferred Qualifications:

* Bachelor's degree in technology, engineering, or a related field
* AWS certifications: Solutions Architect, DevOps Engineer, etc.

Why Spotter:

* Medical and vision insurance covered up to 100%
* Dental insurance
* 401(k) matching
* Stock options
* Complimentary gym access
* Autonomy and upward mobility
* Diverse, equitable, and inclusive culture, where your voice matters

In compliance with local law, we are disclosing the compensation, or a range thereof, for roles that will be performed in Culver City. Actual salaries will vary and may be above or below the range based on various factors including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. A reasonable estimate of the current pay range is $100K-$500K salary per year. The range listed is just one component of Spotter's total compensation package for employees. Other rewards may include an annual discretionary bonus and equity.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Docker, Testing, DevOps, Cloud, Senior, Engineer and Linux:
$50,000 — $80,000/year
#Location
Los Angeles, California, United States
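The 99.99% uptime target in the Senior AWS Cloud Engineer posting above implies a concrete downtime budget, which is worth internalizing before designing monitoring and alerting. A small sketch of the arithmetic; the 30-day window is an assumption, since error budgets can be computed over any period:

```python
def allowed_downtime_minutes(availability: float, days: int = 30) -> float:
    """Downtime budget implied by an availability target over a window."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability)

# "Four nines" over a 30-day month leaves just over four minutes of downtime,
# which is why automated failover matters: a human page-and-respond loop
# alone rarely fits inside that budget.
print(round(allowed_downtime_minutes(0.9999), 2))  # 4.32
```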
Remote Senior Software Engineer Content Services Data
\nApplication Deadline: Monday, November 20th at 5:00 PM EST \n\nEach day you will work with a cross-functional team of brilliant professionals combining business, design, product, user experience and engineering expertise, working relentlessly to push the boundaries of what's possible and paving the road for the future of news and entertainment media.\n\nThe Audience Technology group is looking for an experienced, talented and knowledgeable Senior Software Engineer to join the Data/Content Services team responsible for developing, supporting and maintaining our data and analytics products and services. These services are used to highlight key trends and insights across podcasts, web, mobile apps and social media. These datasets are used to extract insights from complex media usage in order to inform stakeholders both at NPR and at Member Stations across the country. \n\nAs a senior software engineer on our team, you will be met with exciting challenges to iterate on existing systems and build new datasets, dashboards and pipelines for analyzing trends in audience engagement.\n\nIn addition, the team is responsible for core backend APIs and other services that handle podcast distribution, as well as content delivery for the NPR.org homepage, topic stories, and local and national newscasts. Our stakeholders range from local member stations around the country to key business stakeholders inside of NPR. Come join us and make an impact for the NPR mission!\n\nThis is a union represented role covered under the terms of a collective bargaining agreement with DMU. \n\nRESPONSIBILITIES\n\n\n* Support the NPR Content Services and Analytics team in data analytics, dashboarding, and pipelining. 
\n\n* Write clean, efficient and reusable code based on product requirements\n\n* Participate in all phases of quality assurance and defect resolution\n\n* Aid in the development and maintenance of CI/CD pipeline implementations\n\n* Knowledge share, write technical designs & participate in code reviews\n\n* Mentor and coach mid-level engineers on code quality and best practices\n\n* Consult with lead and senior engineers while designing comprehensive solutions \n\n* Provide input on system design and architecture within the feature areas and services owned by the team\n\n* Work closely with other software engineers, partner teams, infrastructure engineers, product designers, QA engineers, engineering managers and product managers\n\n* Improve team/development processes\n\n* Join agile ceremonies, including daily stand-ups, sprint retros, sprint reviews and more\n\n* Join our on-call rotation\n\n* Other duties as assigned\n\n\n\n\nThe above duties and responsibilities are not an exhaustive list of required responsibilities, duties and skills. 
Other duties may be assigned, and this job description can be modified at any time.\n\nMINIMUM QUALIFICATIONS\n\n\n* Fluency in Python, LookML and other data-focused languages\n\n* Working knowledge of BigQuery or similar (Redshift, Azure, Snowflake, etc.)\n\n* Prior experience working with business intelligence tools like Looker or similar (Tableau, Power BI, Mode, etc.)\n\n* Familiarity with SQL/DML and RDBMS technologies \n\n* Fluency in JavaScript / TypeScript\n\n* Experience in developing and working with RESTful APIs that utilize cloud infrastructure such as AWS\n\n* Ability to develop software that is scalable and performant under high loads\n\n* Strong object-oriented programming skills \n\n* Familiarity with deploying and monitoring production systems\n\n* Experience writing unit and other automated tests using tools like Postman and Jest\n\n* Knowledge of web development best practices, coding standards, code reviews, source control management, build processes, deployment, rollback, testing, monitoring\n\n\n\n\nPREFERRED QUALIFICATIONS\n\n\n* Familiarity with R for advanced data analysis\n\n* Experience using APIs to retrieve analytics data\n\n* Excellent problem solving, analysis and data interpretation skills with a keen sense for data inconsistencies\n\n* Experience with NoSQL databases (e.g. 
Elasticsearch, DynamoDB)\n\n* Familiarity with the Salesforce platform\n\n* Advanced experience with Amazon AWS or an equivalent cloud computing platform, including Lambda, SNS, EC2, ASGs, ElastiCache, DynamoDB, RDS and CodeDeploy\n\n* Advanced experience with Google Cloud Platform, including BigQuery Omni, Cloud Functions, Dataplex and Composer\n\n* Experience with CI/CD pipelines (GitHub Actions, Jenkins, CodeFresh, TravisCI or equivalent)\n\n* Experience using observability and log aggregation platforms (Datadog, CloudWatch)\n\n* Familiarity with the different layers of caching (browser, DNS, web server, application, etc.) and caching technologies/services (Redis, ElastiCache, CDNs, AWS CloudFront)\n\n* A passion for NPR's content and/or familiarity with our digital products\n\n\n\n\nWORK LOCATION\n\nRemote Permitted: This is a remote-permitted role. This role is based out of our Washington, DC office, but the employee may choose to work on a remote basis from a location that NPR approves.\n\nJOB TYPE\n\nThis is a full time, exempt position.\n\nCOMPENSATION\n\nSalary Range: The U.S.-based anticipated salary range for this opportunity is $126,541 - $134,248 plus benefits. The range displayed reflects the minimum and maximum salaries NPR expects to provide for new hires for the position across all US locations.\n\nBenefits: NPR offers access to comprehensive benefits for employees and dependents. Regular, full-time employees scheduled to work 30 hours or more per week are eligible to enroll in NPR's benefits options. Benefits include access to health and wellness, paid time off, and financial well-being. Plan options include medical, dental, vision, life/accidental death and dismemberment, long-term disability, short-term disability, and voluntary retirement savings to all eligible NPR employees. \n\nDoes this sound like you? If so, we want to hear from you. \n\n#Salary and compensation\n
No salary data published by company, so we estimated the salary based on similar Design, Salesforce, JavaScript, Cloud, NoSQL, Mobile, Senior, Engineer and Backend jobs:\n\n
$50,000 — $105,000/year\n
\n\n#Benefits\n
* 401(k)\n\n* Distributed team\n\n* Async\n\n* Vision insurance\n\n* Dental insurance\n\n* Medical insurance\n\n* Unlimited vacation\n\n* Paid time off\n\n* 4 day workweek\n\n* 401k matching\n\n* Company retreats\n\n* Coworking budget\n\n* Learning budget\n\n* Free gym membership\n\n* Mental wellness budget\n\n* Home office budget\n\n* Pay in crypto\n\n* Pseudonymous\n\n* Profit sharing\n\n* Equity compensation\n\n* No whiteboard interview\n\n* No monitoring system\n\n* No politics at work\n\n* We hire old (and young)\n\n
\n\n#Location\nWashington, District of Columbia, United States