Beware of fraudulent recruitment activities impersonating Degreed. Scammers are using our name, "Degreed", impersonating our website, and claiming to be affiliated with Degreed as part of a recruitment scam. Please note that Degreed does not recruit talent through WhatsApp, Telegram or any other direct-messaging systems; we use only Degreed.com e-mail and, during the interview process, phone calls. We also do not request sensitive personal or financial information in an unsolicited manner, nor do we offer employment opportunities that require upfront payments or promise unrealistic returns.

Degreed is the upskilling platform that connects learning to opportunities. We integrate everything people use to learn and build their careers (skill insights, LMSs, courses, videos, articles, and projects) and match everyone to growth opportunities that fit their unique skills, roles, and goals.

The Degreed Client Experience (CX) team plays a crucial role in ensuring customer satisfaction and success. The CX team's deep knowledge allows it to strategically guide clients, providing tremendous value. The CX team actively participates in and assists clients with their learning journeys and transformations. The candidate will work closely with the CX technical teams to design and maintain client support workflows powered by AI.

This role is based onsite in Bengaluru, India. After an in-office onboarding period, incumbents are expected to be in the office a few days per week as part of a hybrid work model. Candidates will also be required to travel internationally 1-2 times annually for full company gatherings.

Day in the Life

* Design, develop, and maintain cloud-based AI applications, leveraging a full-stack technology stack to deliver high-quality, scalable, and secure solutions.
* Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics features and functionality that meet business requirements and user needs.
* Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability.
* Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation.
* Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies.
* Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience.
* Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code.
* Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform.
* Collaborate with DevOps and infrastructure teams to automate deployment and release processes, implement CI/CD pipelines, and optimize the development workflow for the analytics engineering team.
* Collaborate closely with and influence business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors.
* Influence, educate and directly support the analytics application engineering capabilities of our clients.

Who You Are

We seek outstanding individuals to join our outstanding teams. As a Lead AI Software Engineer, you not only want to deliver great products, you also want to collaborate with other great engineers:

* 5+ years at Senior or Staff level, or equivalent software development experience
* Experience with client-side technologies such as React, Angular, Vue.js, HTML and CSS
* Experience with server-side technologies such as Django, Flask and FastAPI
* Experience with cloud platforms and services (AWS, Azure, GCP) via Terraform automation (good to have)
* 2+ years of Python
* Experience with Git for versioning and collaborating
* Experience with DevOps, CI/CD, GitHub Actions
* Exposure to LLMs, prompt engineering and LangChain a plus
* Experience with workflow orchestration; it doesn't matter whether it's dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other
* Experience implementing large-scale structured or unstructured databases, and orchestration and container technologies such as Docker or Kubernetes
* Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines in terms they understand
* Curiosity, proactivity and critical thinking
* Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and the implications of computer architecture on software performance
* Strong knowledge of designing API interfaces
* Knowledge of data architecture, database schema design and database scalability
* Experience with Agile development methodologies

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Docker, Travel, DevOps, Cloud, HTML, Git, API, Senior and Engineer roles:
$82,500 — $127,500/year

#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)

#Location
Bengaluru, Karnataka, India
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay for equipment that the company promises to reimburse later, and never pay for training you are required to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages about "how to work online" are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter; a good check is whether the domain name of the site or e-mail matches the company's main domain. Scams in remote work are rampant, so be careful and read more to avoid them. When you click the apply button above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.
What You'll Do:

We're looking for a talented and intensely curious Senior AWS Cloud Engineer who is nimble and focused, with a startup mentality. In this newly created role you will be the liaison between data engineers, data scientists and analytics engineers. You will work to create cutting-edge architecture that provides increased performance, scalability and concurrency for Data Science and Analytics workflows.

Responsibilities

* Provide AWS infrastructure support and systems administration for new and existing products, implemented through IAM, EC2, S3, AWS networking (VPC, IGW, NGW, ALB, NLB, etc.), Terraform, CloudFormation templates, and security services: Security Groups, GuardDuty, CloudTrail, Config and WAF.
* Monitor and maintain production, development, and QA cloud infrastructure resources for compliance with all six pillars of the AWS Well-Architected Framework, including the Security Pillar.
* Develop and maintain Continuous Integration (CI) and Continuous Deployment (CD) pipelines needed to automate testing and deployment of all production software components as part of a fast-paced, agile engineering team. Technologies required: ElastiCache, Bitbucket Pipelines, GitHub, Docker Compose, Kubernetes, Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS) and Linux-based server instances.
* Develop and maintain Infrastructure as Code (IaC) services for creation of ephemeral cloud-native infrastructure hosted on Amazon Web Services (AWS) and Google Cloud Platform (GCP). Technologies required: AWS CloudFormation, Google Cloud Deployment Manager, AWS SSM, YAML, JSON, Python.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 99.99% uptime. Technologies required: AWS IAM, AWS CloudWatch, AWS EventBridge, AWS SSM, AWS SQS, AWS SNS, AWS Lambda and Step Functions, Python, Java, RDS Postgres, RDS MySQL, AWS S3, Docker, AWS Elasticsearch, Kibana, AWS Amplify.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on Amazon Web Services (AWS) needed to ensure 100% cybersecurity compliance and surveillance. Technologies required: AWS SSM, YAML, JSON, Python, RDS Postgres, Tenable, CrowdStrike EPP, Sophos EPP, Wiz CSPM, Linux Bash scripts.
* Design and code technical solutions that improve the scalability, performance, and reliability of all Data Acquisition pipelines. Technologies required: Google Ads APIs, YouTube Data APIs, Python, Java, AWS Glue, AWS S3, AWS SNS, AWS SQS, AWS KMS, AWS RDS Postgres, AWS RDS MySQL, AWS Redshift.
* Monitor and remediate server and application security events as reported by CrowdStrike EPP, Tenable, Wiz CSPM and Invicti.

Who you are:

* Minimum of 5 years of systems administration or DevOps engineering experience on AWS
* Track record of success in systems administration, including system design, configuration, maintenance, and upgrades
* Excels in architecting, designing, developing, and implementing cloud-native AWS platforms and services
* Knowledgeable in managing cloud infrastructure in a production environment to ensure high availability and reliability
* Proficient in automating system deployment, operation, and maintenance using Infrastructure as Code: Ansible, Terraform, CloudFormation, and other common DevOps tools and scripting
* Experience with Agile processes in a structured setting required; Scrum and/or Kanban
* Experience with security and compliance standards such as PCI and SOC, as well as data privacy and protection standards, a big plus
* Experienced in implementing dashboards and data for decision-making related to team and system performance, relying heavily on telemetry and monitoring
* Exceptional analytical capabilities
* Strong communication skills and the ability to effectively interact with engineering and business stakeholders

Preferred Qualifications:

* Bachelor's degree in technology, engineering, or a related field
* AWS certifications: Solutions Architect, DevOps Engineer, etc.

Why Spotter:

* Medical and vision insurance covered up to 100%
* Dental insurance
* 401(k) matching
* Stock options
* Complimentary gym access
* Autonomy and upward mobility
* Diverse, equitable, and inclusive culture, where your voice matters

In compliance with local law, we are disclosing the compensation, or a range thereof, for roles that will be performed in Culver City. Actual salaries will vary and may be above or below the range based on various factors including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. A reasonable estimate of the current pay range is $100-$500K salary per year. The range listed is just one component of Spotter's total compensation package for employees. Other rewards may include an annual discretionary bonus and equity.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Docker, Testing, DevOps, Cloud, Senior, Engineer and Linux roles:
$50,000 — $80,000/year

#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)

#Location
Los Angeles, California, United States
**About us:**

FeedStock is a VC-backed enterprise SaaS start-up that is working with some of the largest financial and professional services firms in the world. We are a team of bold, ambitious developers, creatives and financial services professionals united by a single vision: to leverage AI to make businesses more profitable.

We are taking on the largest software segment in the world (CRMs) with a revolutionary new approach to understanding commercial relationships: Client Relationship Analytics (CRA). We automatically capture, structure and measure client activity data across digital touch points to deliver insights like no other.

We are a small and senior team who operate on a flat management structure: everyone is encouraged to contribute to product design, strategy and, of course, development solutions.

Working at FeedStock is fast-paced, dynamic and never dull. If you enjoy having a high level of accountability and taking full ownership of your own tasks, as well as working in a collegial, supportive environment, please get in touch!

**About the role:**

We are looking to recruit a talented and passionate Python backend developer. You will need to be interested in all things technology, have advanced Python coding skills and have experience with Linux.

You will have a strong understanding of software engineering, strong programming skills and an understanding of automated testing, including unit, integration and end-to-end testing. You will have ownership of the whole life-cycle of FeedStock's product components, from developing new features and fixing bugs to deploying to cloud infrastructure and maintaining the production systems.

You will report directly to the Head of Development, though we hope for you to retain relative autonomy over how to deliver to deadlines in line with the product and technology roadmaps.

We are looking for someone who has a team mentality and enjoys contributing enthusiastically to the broader discussion about product strategy, to service delivery planning and to mentoring junior developers.

**What you will be doing:**

- Write clear, efficient, tested code
- Work with the team to ensure system reliability, performance and uptime
- Take ownership of the technical architecture and solutions design
- Maintain and develop ETL data pipelines to ensure usability and accuracy across the entire data infrastructure
- Build integrations into large enterprise IT systems
- Troubleshoot system issues
- Contribute to the continuous improvement of internal DevOps tools
- Build RESTful and GraphQL APIs
- Learn and use the latest technologies as part of a talented, motivated team
- Work with the data science team to deliver and optimise AI models
- FeedStock is an ISO 27001 certified company. All employees are required to complete Information Security training and uphold FeedStock's Information Security Management System.

**What we are looking for:**

- Exceptional Python coding skills
- Experience with production environment deployments on AWS public cloud systems
- Strong experience with relational databases
- Understanding of Kubernetes
- Confidence operating with Linux instances (Ubuntu Server, RedHat) and Docker
- Enthusiasm and good collaboration
- Pro-active and self-motivated
- Interest in new technologies, demonstrated through conference attendance and contribution to open source, is a plus

**What you get:**

- The chance to join an exciting start-up as it goes through rapid scaling
- Pioneering work with leading-edge AI technology
- A passionate, dynamic and fun team that is quickly expanding
- Flexible working location, with central-London offices
- Private health insurance
- 25 days' annual holiday, with the option of carrying 5 days over into the next year
- A training and development budget and 3 days' additional leave per year to attend conferences/training opportunities
- Cycle2Work scheme

Please mention the words **ALLEY CRUISE RARE** when applying to show you read the job post completely (#RMjE2LjczLjIxNi4yNTA=). This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

#Salary and compensation
$50,000 — $80,000/year

#Location
Europe, GMT timezone +/- 2
# How do you apply?

Please send CV and introduction to [email protected]
This job post is closed and the position is probably filled. Please do not apply. If you work for Source Coders and want to re-open this job, use the edit link in the e-mail you received when you posted it.
Closed by robot after the apply link errored with code 404, 3 years ago.
As more companies adopt public cloud infrastructure and cyber attacks grow in sophistication and harm, the ability to safeguard companies from these threats has never been more urgent.

Lacework's novel approach to security fundamentally converts cyber security into a big data problem. They are a startup based in Silicon Valley that applies large-scale data mining and machine learning to public cloud security. Within a cloud environment (AWS, GCP, Azure), their technology captures all communication between processes, users and external machines, and uses advanced data analytics and machine learning techniques to detect anomalies that indicate potential security threats and vulnerabilities. The company is led by an experienced team who have built large-scale systems at Google, ParAccel (Amazon Redshift), Pure Storage, Oracle, and Juniper Networks. Lacework is well funded by a tier-one VC firm and is based in San Jose, CA.

They are looking for a Senior DevOps Engineer with strong AWS and Kubernetes experience who is excited about building an industry-leading, next-generation Cloud Security System.

You will be a part of the team that architects, designs, and implements highly scalable distributed systems that provide availability, scalability and performance guarantees. This is a unique and rare opportunity to get in on the ground floor and help shape their technologies, products and business.

Roles/Responsibilities

* Assist in managing technical operations, site reliability, production operations and engineering environments
* Run production operations for their SaaS product
  * Manage the monitoring system
  * Debug live production issues
  * Manage software release roll-outs
* Use your engineering skills to promote platform scalability, reliability, manageability and cost efficiency
* Work with the engineering and QA teams to provide valuable feedback about how to improve the product
* Participate in on-call rotations (but there is really not a lot of work, since you will automate everything!)

Requirements:

* 4+ years of relevant experience (technical operations, SRE, systems administration)
* AWS experience
* Experienced scripting skills in Shell and/or Python
* Eager to learn new technologies
* Ability to define and follow procedures
* Great communication skills
* Computer Science degree

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to DevOps, Senior, Engineer, Backend, Cloud and SaaS roles:
$70,000 — $120,000/year

#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. If you work for Catalyst Repository Systems and want to re-open this job, use the edit link in the e-mail you received when you posted it.
We are recruiting for two positions, one senior and one junior. If you have MarkLogic experience, we'd like to speak with you!

Mission

This position will be tasked with the overall performance and availability of Catalyst's MarkLogic NoSQL database installations and will serve as an expert on configuration, performance, and availability in a multi-cluster production environment.

This job requires the ability to work with a great degree of autonomy, while also having peers with the same responsibility for collaboration. Because the work will be visible to other department directors and upper management, frequent communication with your manager concerning projects and issues is expected.

Outcomes

* Work with the senior MarkLogic Data Engineer to establish and improve active monitoring of the MarkLogic services and develop methods of identifying performance issues and/or usage anomalies.
* Provide data-driven evidence to support identification of performance issues and/or usage anomalies; develop methods to test and collect/analyze data for potential solutions; and provide data-driven evidence to confirm successful implementation of solutions.
* Collaborate in the planning phase of any new or modified business operation that employs the MarkLogic service, acting as an expert advisor.
* Anticipate and plan for platform expansion where necessary, keeping in mind cost savings (acquisition and recurring) and expansion options.
* Establish and maintain MarkLogic server security guidelines, practices, and department procedures, with the ability to audit all security actions (SOC 2).
* Establish and maintain effective customer points of contact (development, infrastructure, customer-facing departments, MarkLogic contracted support) for collaboration and troubleshooting projects related to existing and new business.
* Provide visibility into and understanding of the cost to your customers of their platform resource consumption, as an aid in planning their use of limited resources.
* Learn the internally developed analytics processes and collaborate to bring them closer to real time.

Competencies

* Has deep knowledge of the MarkLogic NoSQL server, including administration, knowledge of XML, proficiency in writing complex XQuery for MarkLogic operations, and familiarity with the REST API used to access the data service.
* Proficient in investigating issues related to hardware, improper configuration, and data service usage.
* Understands and can implement and troubleshoot high-availability solutions such as replication and failover.
* Has a practiced understanding of MarkLogic installation, backup and restoration, and failover and recovery.
* Has a good understanding of dependent system hardware, storage subsystems, and networking.
* Can work directly with project managers, infrastructure engineers and software developers to design and implement data systems that meet business requirements.
* Has strong analytical skills as applied to information technology and can work independently on an assignment.
* Raises awareness of issues that can negatively impact delivering on time and to specification.
* Communicates concepts and instructions clearly, both verbally and in written form.
* Understands how to accurately and tactfully weigh technical needs against budgetary considerations.
* Has a desire to keep current with technologies, learning new technologies, mastering them, and sharing that knowledge with teammates.
* Can speak honestly, openly and tactfully with both managers and customers as a collaborator, adviser or lead.
* Programming experience, preferably in R or Python (NumPy and SciPy), is a plus.
* A Bachelor's degree in Computer Science is a plus, as it would be expected to provide a solid theoretical background in operating systems, database management, programming, and mathematics/statistics.
* An understanding of container technology (e.g. Docker), including container orchestration (e.g. Kubernetes, Docker Swarm, etc.), is a plus.
* An understanding of DevOps methodologies and tools to automate software provisioning, configuration management, and application deployment (e.g. Ansible, Puppet, etc.) is a plus.
* An understanding of version control systems (e.g. Git, Subversion, etc.) is a plus.

Other Information

* All work and team collaboration can be performed remotely.
* Must be authorized to work in the United States.
* Must be able to pass a Federal criminal background check.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar jobs related to Engineer, DevOps, NoSQL, Python, API and Senior roles:
$70,000 — $120,000/year

#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply. If you work for Federated Wireless and want to re-open this job, use the edit link in the e-mail you received when you posted it.
Federated Wireless is a dynamic, fast-paced, cutting-edge software company that is leading the wireless industry through the shared spectrum revolution.

Federated Wireless is disaggregating wireless networks to allow for new disruptive models for fast, low-cost, cloud-enabled wireless connectivity solutions. We are taking advantage of the latest cloud services and implementing advanced algorithms to fully automate service creation and delivery. We are looking for leaders who want to revolutionize the way wireless networks are built.

Federated Wireless is led by CEO Iyad Tarazi and a team of industry veterans who continue to build on this heritage, pioneering new territory in the commercialization of shared spectrum.

The Role:

The Senior Software Engineer is a full-stack software engineer who is expected to independently design, develop, deploy and maintain the assigned projects in all phases of the agile development and deployment life cycle. She or he will be required to design components or sub-components and then follow through with the coding, testing and integration of all components. Self-motivation, teamwork and experience working in a fast-paced agile environment are highly desired.

Responsibilities

* Designs, develops, tests, and documents cloud-based as well as stand-alone services with support for RESTful APIs
* Provides complete ownership of an application or feature (design, development, testing, deployment, support) within the team
* Implements queries to relevant databases
* Configures automated system integration through CI/CD
* Tracks different aspects of development and testing work in an Agile process
* Creates automated unit tests, integration tests and stress/load tests, and tracks found bugs using scripting languages and automation frameworks
* Assists with product studies, performs requirements analysis, and develops software architectures to meet requirements
* Creates technical proposals and white papers, writes functional and design specifications
* Follows security guidance in the development process as well as in software design
* Measures compliance against standards where relevant

General skills and Education:

* 5+ years of experience
* Excellent oral and written communication skills
* BS or MS in Software Engineering, Computer Science, or Computer Engineering

And experience in one of the following categories:

Software Development and Engineering:

* Programming in Java or C/C++, and scripting (for example Python, JavaScript, Ruby)
* Databases: SQL, NoSQL
* RESTful server and client implementations
* Git source code control
* Strong knowledge of open-source libraries/packages
* Full-stack web development experience (front-end GUI and back-end server development)
* Experience with automation and DevOps technologies (such as Puppet, Chef, Ansible, etc.)
* Experience with test-driven development methodology
* Experience with Agile development and CI/CD pipelines; familiarity with Jira/Atlassian and Jenkins (or a similar solution) desirable

Cloud:

* Experience with cloud platforms, and designing and orchestrating applications for scale
* Familiarity with AWS Cloud and native services such as EC2, ECS, EBS, S3, DynamoDB, EFS, CloudFront and CloudWatch desirable
* Third-party cloud services such as MongoDB for NoSQL storage and ELK for analytics desirable
* Familiarity with automated verification frameworks for cloud applications desirable
#Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Engineer, Full Stack, Developer, Digital Nomad, DevOps, Cloud and Senior roles:
$70,000 — $120,000/year

#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.