This job post is closed and the position is probably filled. Please do not apply.
**Note: although this is a remote position, we are currently only seeking candidates in time zones between UTC-2 and UTC+7.**

Hotjar is looking for a driven and ambitious DevOps Engineer with Big Data experience to support and expand our cloud-based infrastructure, which is used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 7,500 API requests per second, delivers over a billion pieces of static content every week, and hosts databases well into the terabyte range, making this an interesting and challenging opportunity. As Hotjar continues to grow rapidly, we are seeking an engineer who has experience with high-traffic, cloud-based applications and can help Hotjar scale as our traffic multiplies.

This is an excellent career opportunity to join a fast-growing remote startup in a key position.

In this position, you will:

- Be part of our DevOps team building and maintaining our web application and server environment.
- Choose, deploy and manage tools and technologies to build and support a robust infrastructure.
- Be responsible for identifying bottlenecks and improving the performance of all our systems.
- Ensure all necessary monitoring, alerting and backup solutions are in place.
- Do research and keep up to date on trends in big data processing and large-scale analytics.
- Implement proof-of-concept solutions in the form of prototype applications.

# Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Amazon, Elasticsearch, Python, Engineer, Full Time, Web Developer, DevOps, Cloud and API jobs:

$70,000 — $120,000/year
# Benefits

- 401(k)
- Distributed team
- Async
- Vision insurance
- Dental insurance
- Medical insurance
- Unlimited vacation
- Paid time off
- 4 day workweek
- 401k matching
- Company retreats
- Coworking budget
- Learning budget
- Free gym membership
- Mental wellness budget
- Home office budget
- Pay in crypto
- Pseudonymous
- Profit sharing
- Equity compensation
- No whiteboard interview
- No monitoring system
- No politics at work
- We hire old (and young)
# Location

Valletta
This job post for Spinn3r is closed and the position is probably filled. Please do not apply.
Ideal Candidate

We're interested in someone comfortable with a generalist and DevOps role. You should be knowledgeable about standard system administration tasks and have a firm understanding of the role of load balancers and cluster architecture. It's 100x harder to write code if you don't know how the underlying operating system works.

We're looking for someone with a legitimate passion for technology, big data, and analyzing vast amounts of content.

We are also looking for people outside of the U.S. and Canada to maximize our time zone distribution. Ideally there should be at least a 4-hour overlap with the Pacific Standard Time zone (PST / UTC-8). We're based out of San Francisco but are expanding internationally. If you don't have a natural time overlap with UTC-8, you should be willing to work evenings so you can communicate easily with the rest of the team.

Culturally, we're a remote company and want to embrace that as a way to reward our employees. We are fine with you working from remote locations as long as you're generally available for communication and are productive.

We want someone to come in full time in a contractor role. We will need about 40 hours from you per week.

Job Responsibilities:

- Understanding our crawler infrastructure and ensuring top-quality metadata for our customers. There's a significant batch-job component to analyze the output from the crawl to ensure top-quality data.
- Making sure our infrastructure is fast, reliable, fault tolerant, etc. At times this may involve diving into the source of tools like ActiveMQ and Cassandra and understanding how the internals work. We contribute a LOT to Open Source development when our changes need to be given back to the community.
- Building out new products and technology that will directly interface with customers. This includes cool features like full-text search, analytics, etc. It's extremely rewarding to build something from the ground up and push it to customers directly.

Architecture:

Our infrastructure consists of Java on Linux (Debian/Ubuntu) with the stack running on ActiveMQ, Cassandra, Zookeeper, and Jetty. We use Ansible to manage our boxes. We have a full-text search engine based on Elasticsearch, and store our firehose API data within Cassandra. (A short illustrative sketch of the ActiveMQ-based crawl queue appears at the end of this post.)

We have a totally new stack and infrastructure at this point. We recently did a full-stack rewrite and moved all the old code to our new infrastructure. This means we have very little legacy cruft to deal with.

Here's all the cool stuff you get to play with:

- A large Linux / Ubuntu cluster with the OS versioned using both Ansible and our own Debian packages for software distribution.
- A massive amount of data indexed from the web and social media. We index 5-20TB of data per month and want to expand to 100TB of data per month.
- A large Cassandra install on SSD.
- A SOLR / Elasticsearch migration / install. We're experimenting with bringing this up now, so it would be valuable to get your feedback.

Technical Skills:

Here's where you shine! We're looking for someone with a number of the following requirements:

- Linux. Linux. Linux. Did I say Linux? We like Linux.
- Experience in modern Java development and associated tools: Maven, IntelliJ IDEA, Guice (dependency injection).
- A passion for testing, continuous integration, and continuous delivery.
- Cassandra. Stores content indexed by our crawler.
- ActiveMQ. Powers our queue server for scheduling crawl work.
- A general understanding of and passion for distributed systems.
- Ansible or equivalent experience with configuration management.
- Standard web API use and design (HTTP, JSON, XML, HTML, etc.).

Cultural Fit:

- We're a lean startup and very driven by our interaction with customers, as well as their happiness and satisfaction. Our philosophy is that you shouldn't be afraid to throw away a week's worth of work if our customers aren't interested in moving in that direction.
- We hold the position that our customers are 1000x smarter than we are, and we try to listen to them intently and consistently.
- Proficiency in English is a requirement. Since you will have colleagues in various countries with various primary language skills, we all need to use English as our common company language. You must also be able to work with email, draft proposals, etc. Internally we work like a large distributed Open Source project and use tools like traditional email, Slack, Google Hangouts, and Skype.
- Familiarity working with a remote team and the ability (and desire) to work for a virtual company.
- You should have a home workstation, fast Internet access, etc.
- You must be able to manage your own time and your own projects. Self-motivated employees will fit in well with the rest of the team.
- It goes without saying, but being friendly and a team player is very important.
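The post itself contains no code, but to make the ActiveMQ-based crawl queue described under Architecture a bit more concrete, here is a minimal, hypothetical sketch of a producer enqueuing crawl work through the standard JMS API. The broker URL, the queue name `crawl.work`, and the URL payload are illustrative assumptions, not details taken from the posting.

```java
// Minimal, hypothetical sketch: enqueue a crawl task onto an ActiveMQ queue via JMS.
// The broker URL, queue name, and payload below are illustrative assumptions only.
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class CrawlWorkProducer {

    public static void main(String[] args) throws Exception {
        // Connect to a broker; a real deployment would point at the production broker(s).
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            // Non-transacted session with automatic message acknowledgement.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("crawl.work"); // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);

            // Each message carries a URL that a crawler worker should fetch.
            TextMessage message = session.createTextMessage("https://example.com/feed.xml");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```

A consumer on the worker side would read from the same queue with a `MessageConsumer` and hand each URL to a crawler; the session and destination setup follows the same pattern.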
# Salary and compensation

No salary data was published by the company, so we estimated the salary based on similar DevOps, Java, HTML, API, Engineer, Linux, Cassandra, Design, Ansible, Testing, Web Developer, Digital Nomad, English and Elasticsearch jobs:

$65,000 — $120,000/year