Remote OK: find a remote job, work from anywhere

Remote Docker + Elasticsearch + Engineering jobs
Active filters: Docker, Elasticsearch, Engineering (6 results)
Constant Contact is hiring a Remote Principal Data Architect, Customer Data Platform

Estimated salary: $38k - $78k*
Tags: Architect, Software, Cloud, API, Marketing, Apache
581 views · 16 applied (3%)

We have an opening for a Principal Data Engineer/Architect to collaborate with our Data Platform and Segmentation teams to build solutions that improve our ability to process and leverage data and increase our data integration velocity. As a Data Engineer/Architect, you will partner with various teams to develop business requirements, standardize the business input, and develop a long-term vision. You'll play a pivotal role working with other Data Engineers who build and maintain the data pipelines and data lake that enable and accelerate Data Science, Machine Learning, and AI, as well as the engine for real-time segmentation and marketing automation within Constant Contact.

What you'll do:

* Work closely with Data Science/ML/AI teams to leverage and provide access to the vast data available for data-driven marketing insights for our customers
* Work with cross-functional teams, define data strategies, and leverage the latest technologies in data processing and data analytics
* Design, implement, and build data models and pipelines that deliver data with measurable quality under the service level agreement
* Design, develop, and deliver improvements to Constant Contact's data integration practices, data analytics, and real-time stream processing
* Work with the teams and stakeholders to scope and prioritize solutions
* Establish rigorous engineering processes to ensure service quality, deliver new capabilities, and continuously improve metrics

Who you are:

* 8+ years of experience in business analytics, data science, software development, data modeling, and/or data engineering work
* 3+ years of experience creating high-quality data pipelines
* Proficiency in Java, Python, and SQL
* Strong understanding of OLAP concepts and experience with OLAP technologies such as ClickHouse, Druid, Pinot, or similar platforms
* Familiarity with search technologies such as Elasticsearch for high-performance real-time applications (see the sketch after this listing)
* Experience orchestrating data pipelines with technologies such as Airflow, Dagster, and/or NiFi
* Familiarity with stream-processing frameworks such as Apache Flink
* Experience with AWS cloud services including, but not limited to, Kinesis, Glue, S3, Lambda, API Gateway, DynamoDB, and Athena
* Experience with Docker and Kubernetes
* Certification in AWS Cloud (AWS Certified Solutions Architect or similar)

#LI-HK1 #LI-Remote

Salary and compensation: No salary data was published by the company, so we estimated the salary based on similar jobs related to Docker, Cloud, API, and Marketing: $37,500 - $77,500/year
Location: Waltham, Massachusetts, United States
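The requirements above pair Docker with Elasticsearch for high-performance, real-time segmentation work. As a rough, generic illustration of that kind of workload (not Constant Contact's actual system), here is a minimal sketch using the Python `elasticsearch` client with its 8.x-style API; the index name, fields, and local URL are invented placeholders.

```python
from elasticsearch import Elasticsearch

# Placeholder endpoint; a real deployment would use the cluster URL and credentials.
es = Elasticsearch("http://localhost:9200")

# Index a contact document into a hypothetical "contacts" index
# (assumes "segment" is mapped as a keyword field).
es.index(
    index="contacts",
    id="c-1001",
    document={"email": "jane@example.com", "segment": "newsletter", "opens_30d": 12},
)

# Real-time segmentation style query: contacts in a segment with recent engagement.
resp = es.search(
    index="contacts",
    query={
        "bool": {
            "filter": [
                {"term": {"segment": "newsletter"}},
                {"range": {"opens_30d": {"gte": 5}}},
            ]
        }
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["email"])
```

Against a local test cluster (for example, one started from the official Elasticsearch Docker image), this indexes one document and then filters the segment by engagement.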


Thrive Market is hiring a Remote Manager, DevOps Engineering

Region: Worldwide
Estimated salary: $60k - $100k*
Tags: Manager, Jira, System, Security, Architect, Support, Software, DevOps, Cloud, Management, Lead, Health
893 views · 28 applied (3%)

ABOUT THRIVE MARKET

Thrive Market was founded in 2014 with a mission to make healthy and sustainable living easy and affordable for everyone. As an online, membership-based market, we deliver the highest-quality healthy and sustainable products at member-only prices. Every day, we help our 1.4M+ Members find better products, support better brands, and build a better world in the process. We are a profitable, half-billion-dollar-revenue business proving that mission-focused companies can succeed. We are also a Certified B Corporation, recently became a Public Benefit Corporation, and are a Climate Neutral Certified company. Join us as we bring healthy and sustainable living to millions of Americans in the years to come.

THE ROLE

At Thrive Market, our Platform team is in a constant state of innovation, crafting highly performant and scalable services that empower our product teams to create exceptional customer experiences. Here, you will find the autonomy to drive critical initiatives that not only advance our mission but also propel our platform to new heights of excellence. We are seeking an experienced, hands-on DevOps Manager to lead our DevOps team. This person will drive our cloud infrastructure and the continuous delivery of our applications and services. The ideal candidate will have a strong background in software development, cloud infrastructure, and service operations, with a passion for empowering developers, streamlining processes, and automating away toil.

If you have read The Phoenix Project, Accelerate, The DevOps Handbook, and/or the Google SRE book, you will fit right in.

RESPONSIBILITIES:

* Team Leadership:
  * Manage, mentor, and grow a team of DevOps engineers.
  * Foster a collaborative, high-performance culture of continuous improvement within the team.
  * Level up the team through cross-training and strategic task assignments, continually challenging them to raise the bar.
  * Conduct regular performance reviews and provide feedback.
  * Work closely with development, QA, and IT teams to ensure smooth and reliable operation of software and systems.
  * Facilitate communication between teams and stakeholders.
* Continuous Delivery:
  * Implement and manage continuous integration/continuous deployment (CI/CD) pipelines for the company (see the sketch after this post).
  * Identify and remove bottlenecks in the software delivery process.
  * Promote best practices for software deployment.
* Infrastructure Management:
  * Architect, improve, and administer AWS cloud infrastructure and services.
  * Leverage AWS-managed services wherever possible.
  * Create automated orchestration and deployment solutions using Puppet, Terraform, Docker, and Kubernetes.
  * Write and support scripts and automation using Python, Ruby, Bash, JavaScript, and Java.
  * Manage and optimize large-scale, cloud-hosted MySQL, Postgres, Redis, and Elasticsearch databases.
  * Lead efforts for disaster recovery, capacity expansion, and system upgrades.
  * Oversee the provisioning, configuration, and monitoring of infrastructure.
  * Ensure system reliability, availability, and performance.
* Automation:
  * Encourage DevOps engineers and application developers to streamline processes and automate their work whenever possible.
  * Develop and maintain automation for infrastructure provisioning, configuration management, and deployment.
  * Advocate for and implement automation in all aspects of the software lifecycle.
* Security and Compliance:
  * Conduct security audits, vulnerability assessments, and system hardening initiatives, including maintaining PCI and SOX compliance.
  * Ensure that systems and processes adhere to industry best practices for security and compliance.
* Monitoring and Incident Management:
  * Implement and manage monitoring tools to ensure system health and performance.
  * Lead incident response efforts and post-incident reviews to learn from failures and to mitigate and prevent future occurrences.
* Project Management:
  * Manage JIRA ticket creation, grooming, ticket/epic management, and documentation, and keep it up to date.

QUALIFICATIONS:

* Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
* 8+ years of experience in DevOps, SRE, system administration, or software development.
* 2+ years of experience in a leadership or managerial role.
* Experience in both a startup environment and larger, scaled organizations is a plus!
* Extensive experience building and maintaining complex cloud infrastructure on AWS. AWS Certified Solutions Architect or similar is a plus.
* Extensive experience in a DevOps/SRE/Systems Engineering role, with strong experience developing applications in one of the following: Python, Ruby, Groovy, Go.
* Strong experience managing Linux-based infrastructure, preferably Debian/Ubuntu.
* Deep knowledge of and experience with Docker and Kubernetes.
* Experience using and maintaining IaC/config management systems and deployment tools, including Terraform, Puppet, and Ansible.
* Experience troubleshooting production problems and leading multiple teams to resolve large-scale production issues.
* Proficiency with continuous integration and deployments using Jenkins, Concourse, and GitLab CI; knowledge of A/B, in-place, rolling, and phased deployment methodologies.
* Understanding of monitoring and systems tools like Logstash, Grafana, Prometheus, New Relic, etc.
* Good understanding of networking fundamentals and protocols.
* Basic understanding of storage architectures and design.
* Good critical thinking and problem-solving skills.
* Sense of ownership and pride in your performance and its impact on the organization's success.
* Effective interpersonal and communication skills - we work cohesively and want to bring in folks who seek to drive unity.
* Ability to independently lead the team and execute projects promptly.
* Proficiency with Atlassian Jira and Confluence for project management and documentation is a plus.
* Curious, hungry, proactive, results-oriented, and data-driven; thrives in fast-paced, team-oriented environments.

BELONG TO A BETTER COMPANY:

* Comprehensive health benefits (medical, dental, vision, life and disability)
* Competitive salary (DOE) + equity
* 401k plan
* 9 days of observed holiday
* Flexible paid time off
* Subsidized ClassPass membership with access to fitness classes and wellness and beauty experiences
* Ability to work in our beautiful co-working space at WeWork in Playa Vista and other locations
* Free Thrive Market membership with exclusive employee discount
* Coverage for life coaching & therapy sessions on our holistic mental health and well-being platform

We're a community of more than 1 million members who are united by a singular belief: It should be easy to find better products, support better brands, make better choices, and build a better world in the process.

At Thrive Market, we believe in building a diverse, inclusive, and authentic culture. If you are excited about this role along with our mission and values, we encourage you to apply.

Thrive Market is an EEO/Veterans/Disabled/LGBTQ employer.

At Thrive Market, our goal is to be a diverse and inclusive workplace that is representative, at all job levels, of the members we serve and the communities we operate in. We're proud to be an inclusive company and an Equal Opportunity Employer and we prohibit discrimination and harassment of any kind. We believe that diversity and inclusion among our teammates are critical to our success as a company, and we seek to recruit, develop, and retain the most talented people from a diverse candidate pool. If you're thinking about joining our team, we expect that you would agree!

If you need assistance or accommodation due to a disability, please email us at [email protected] and we'll be happy to assist you.

Ensure your Thrive Market job offer is legitimate and don't fall victim to fraud. Thrive Market never seeks payment from job applicants. Thrive Market recruiters will only reach out to applicants from an @thrivemarket.com email address. For added security, where possible, apply through our company website at www.thrivemarket.com.

© Thrive Market 2024. All rights reserved.

JOB INFORMATION:

* Compensation description: The base salary range for this position is $175,000 - $225,000 per year.
* Compensation may vary outside of this range depending on several factors, including a candidate's qualifications, skills, competencies and experience, and geographic location.
* Total compensation includes base salary, stock options, health & wellness benefits, flexible PTO, and more!

#LI-DR1

Salary and compensation: No salary data was published by the company, so we estimated the salary based on similar jobs related to Docker, DevOps, and Cloud: $60,000 - $100,000/year
Location: Los Angeles or Remote
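The responsibilities above center on CI/CD pipelines and deployment automation built around Docker and Kubernetes. As a hedged sketch of one small piece of such a pipeline (not Thrive Market's setup), here is a Python snippet using the Docker SDK (`docker` package) to build and push an image; the registry, repository, and tag are placeholders.

```python
import docker

# Placeholder repository and tag; a real pipeline would derive these from the
# git commit SHA and CI environment variables.
REPOSITORY = "registry.example.com/platform/app"
TAG = "abc1234"

client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag=f"{REPOSITORY}:{TAG}", rm=True)
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Push to the registry (assumes credentials are already configured, e.g. via `docker login`).
for line in client.images.push(REPOSITORY, tag=TAG, stream=True, decode=True):
    if "status" in line:
        print(line["status"])
```

In a real pipeline this step would typically run inside Jenkins, Concourse, or GitLab CI and be followed by a Kubernetes rollout.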

verified · closed

TheyDo is hiring a Senior Backend Engineer (Remote, Europe)

This job is getting a high amount of applications right now (29% of viewers clicked Apply).

Region: Europe
Salary: $80k - $130k
Tags: JavaScript, Node, Senior, Engineer, Backend, GraphQL, Redis, Quality Assurance, Node JS, SaaS, Health, Full-Stack
Website: theydo.io
3,579 views · 1,040 applied (29%)

This job post is closed and the position is probably filled. Please do not apply. Work for TheyDo and want to re-open this job? Use the edit link in the email when you posted the job!


# About you

We're hiring a Senior Backend Engineer to join our founding team. You will be working closely with our CTO, Charles, to shape both the team and the platform while we get ready for scale.

You like to get things right: a **pragmatic perfectionist** who will continuously shape our application architecture and make it ready to scale. You understand the right balance between code readability, simplicity, development speed, performance, and maintainability.

You're well-acquainted with typed NodeJS codebases and preferably these technologies: GraphQL, Apollo, Postgres, Redis, ElasticSearch, Docker, AWS, websockets, microservices, event-driven architecture.

# About TheyDo

TheyDo is the first B2B SaaS platform that allows organizations to redefine cross-team collaboration around the customer journey. It is journey management, the product management way. We help teams make sense of a complex data graph and connect it with various data sources. Our users are design-savvy and we strive to make a highly polished and performant experience for them.

We're a passionate team from the Netherlands, Poland, Ukraine, and Sweden. Founded in 2019, TheyDo has raised $2M+ from top investors to start a movement. We are about to double our team and get our product ready for scale while we are onboarding customers across all continents.

We're on a mission to help organisations scale Journey Management. Today, everyone is in the Experience business; here, we help our customers make better and faster customer-centric decisions across the entire customer experience. Thanks to TheyDo, everyone agrees, including the customer.

Read more on our [website](https://www.theydo.io).

# Your assignment

Your top priority is shaping the architecture of our product and getting it ready for scale. You'll work on the technically ambitious projects we have planned. Some examples currently on our roadmap:

* 🔌 Realizing integrations with a wide ecosystem - Miro, Jira, Google Analytics, etc.
* 📈 Implementing microservices and extending our event-driven architecture.
* ⏱️ Enabling version control on all user data.
* ⚡ Improving real-time collaborative functionality, using fractional indexing, last-writer-wins, and other techniques to provide a superior user experience (see the sketch after this post).

As a founding team member, you will get a chance to set the foundations of our engineering culture. You will help articulate our engineering principles and help set the long-term roadmap.

# We're looking for

* An ambitious engineer with several years of experience working on back-end architecture and design. Previous experience at a scaled product is a big plus.
* An engineer who wants to be at the foundation of a fast-growing team.
* A product-minded engineer who wants to understand how people use our product and why.
* An asynchronous worker who organises and documents their work.
* A clean coder who writes well-structured and maintainable code.

# What we offer

* Remote position, for 4-5 days per week, across flexible working hours.
* Collaborate with zealous colleagues having 20+ years of experience working in the field.
* A unique opportunity to shape a product and our growing team.
* Regular off-sites/company outings with the TheyDo team.
* Competitive compensation and equity package.
* As many vacation days as you need; we expect you to take at least 25.
* Professional development reimbursement.
* Mental health and wellness reimbursement.
* Paid parental leave.
* Home office & technology reimbursement.

To summarise, we value work-life harmony backed by personal freedom under responsibility. Sounds like fun? We're looking forward to having you join our team. ✨🙏

# Our engineering team

The engineering team consists of a CTO, three full-stack engineers, one back-end engineer, and one QA tester. We aim for a relaxed environment within the ambitious goals we have for our product.

Our server is fully typed and built using NodeJS, Apollo, Redis, Postgres, ElasticSearch, and more modern technologies. Our web application is also typed and uses VueJS, Apollo, WebSockets, and more. Other tooling currently includes AWS, Storybook, Cypress, Jest, Stripe, and WorkOS.

A typical day at the office for an engineer includes: flexibility to organise your own time, no set hours, ample time for deep work, as few mandatory meetings as possible, plenty of pair programming with team members to get your code just right, reviewing pull requests, and running around in our virtual office.

View our team members [here](https://www.theydo.io/about-us).

# Our culture

TheyDo's culture is 'Do' rather than 'Talk'. Better to ask for forgiveness than permission; no one will be blamed for trying. We try to keep things simple because complexity slows us down.

It's not about the time spent, but the outcome achieved. It's up to everyone to map, plan, and interact in the best way to get the most out of their day, week, and sprint. Always with an open mindset, because we never know when and where the next great idea will surface.

Being remote, we nourish and cherish connectivity, so no one feels alone or left out. We don't have long lines of communication or decisions, because hierarchies and silos are part of the past and we love to shape the future. In our virtual office, you can just walk up to your team to have a quick chat, get work done, or simply say hello. We motivate everyone to find their own work/life balance. Whether you choose to work asynchronously or synchronously is up to you, as long as it fits you and your team.

TheyDo is an equal employer treating everyone as equals. We value diversity and individuality. We think long term and strive to hire the best match for each role, no matter your background.

Please mention the words **SHIELD TOPIC SLIDE** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

Salary and compensation: $80,000 - $130,000/year
Benefits: Async
Location: Europe
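The roadmap above mentions real-time collaboration techniques such as fractional indexing and last-writer-wins. As a toy illustration of the last-writer-wins idea only (not TheyDo's implementation, which the post says is built on typed NodeJS, GraphQL, and websockets), here is a minimal Python sketch:

```python
import time
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class LWWRegister:
    """Last-writer-wins register: replicas converge by keeping the newest write."""
    value: Any = None
    timestamp: float = 0.0

    def set(self, value: Any, timestamp: Optional[float] = None) -> None:
        ts = time.time() if timestamp is None else timestamp
        if ts >= self.timestamp:  # later (or equal) writes win locally
            self.value, self.timestamp = value, ts

    def merge(self, other: "LWWRegister") -> None:
        """Apply state received from another replica; keep whichever write is newer."""
        if other.timestamp > self.timestamp:
            self.value, self.timestamp = other.value, other.timestamp


# Two replicas edit the same field; after exchanging state they agree on the latest write.
a, b = LWWRegister(), LWWRegister()
a.set("Journey v1", timestamp=1.0)
b.set("Journey v2", timestamp=2.0)
a.merge(b)
b.merge(a)
assert a.value == b.value == "Journey v2"
```

Production systems usually replace wall-clock timestamps with logical or hybrid clocks to avoid ties and clock skew.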
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
verified · closed

Shopify (Infrastructure / Site Reliability Engineering)

Salary: $0k - $0k (no salary data published)
Tags: Cloud, Ruby, Senior, Golang, Engineer, Redis, Developer, Sales, Reliability

This job post is closed and the position is probably filled. Please do not apply. Work for Shopify and want to re-open this job? Use the edit link in the email when you posted the job!


**Company Description**

Shopify is the leading omni-channel commerce platform. Merchants use Shopify to design, set up, and manage their stores across multiple sales channels, including mobile, web, social media, marketplaces, brick-and-mortar locations, and pop-up shops. The platform also provides merchants with a powerful back-office and a single view of their business, from payments to shipping. The Shopify platform was engineered for reliability and scale, making enterprise-level technology available to businesses of all sizes. Headquartered in Ottawa, Canada, Shopify currently powers over 1,000,000 businesses in approximately 175 countries and is trusted by brands such as Allbirds, Gymshark, PepsiCo, Staples, and many more.

Are you looking for an opportunity to work on planet-scale infrastructure? Do you want your work to impact thousands of developers and millions of customers? Do you enjoy tackling complex problems and learning through experimentation? Shopify has all this and more.

The infrastructure teams build and maintain Shopify's critical infrastructure through software and systems engineering. We make sure Shopify, the world's fastest-growing commerce platform, stays reliable, performant, and scalable for our 2,000+ member development team to build on, and our 1.7 million merchants to depend on.

**Job Description**

Our team covers the disciplines of site reliability engineering and infrastructure engineering, all to ensure Shopify's infrastructure is able to scale massively while staying resilient.

On our team, you'll get to work autonomously on engaging projects in an area you're passionate about. Not sure what interests you most? Here are some of the things you could work on:

* Build on top of one of the largest Kubernetes deployments in Google Cloud (we are operating a fleet of over 50 clusters)
* Collaborate with other Shopify developers to understand their needs and ensure our team works on the right things
* Maintain Shopify's Heroku-style self-service PaaS for our developers to consolidate over 400 production services
* Help build our own Database as a Service layers, which include features such as transparent load-balancing proxies and automatic failovers, using the current best-of-breed technologies in the area
* Help develop our caching infrastructure and advise Shopify developers on effective use of the caching layers
* Build tooling that delights Shopify developers and allows them to make an impact quickly
* Work as part of the engineering team to build and scale distributed, multi-region systems
* Investigate and resolve production issues
* Build languages, frameworks, and libraries to support our systems
* Build Shopify's predictable, scalable, and high-performing full-text search infrastructure
* Build and support infrastructure and tooling to protect our platform from bots and DDoS attacks
* Autoscale compute up and down based on the demands of the platform, and further protect the platform by shedding lower-priority requests as the load gets high (see the sketch after this post)
* And plenty more!

**We also understand the importance of sharing our work back to the developer community:**

* Ghostferry: an open-source, cross-cloud, multipurpose database migration tool and library
* Services DB: a platform to manage services across various runtime environments
* Shipit: our open-source deployment tool
* Capturing Every Change From Shopify's Sharded Monolith
* Read consistency with database replicas

**Qualifications**

Some of the technology that the team uses: Ruby, Rails, Go, Kubernetes, MySQL, Redis, Memcached, Docker, CI pipelines, Kafka, ElasticSearch, Google Cloud.

Is some of this tech new to you? That's OK! We know not everyone will come in fully familiar with this stack, and we provide support to learn on the job.

**Additional information**

Our teams are distributed remotely across North American and European timezones.

We know that applying to a new role takes a lot of work and we truly value your time. We're looking forward to reading your application.

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous peoples, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities.

Please mention the words **SALAD PALM DOLL** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

Location: United States, Canada
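One of the projects listed above is autoscaling compute and shedding lower-priority requests as platform load gets high. The sketch below is a generic illustration of priority-based load shedding, not Shopify's implementation; the priority classes and the 80% threshold are invented for the example.

```python
import random

# Hypothetical priority classes: lower number = more important traffic.
CHECKOUT, STOREFRONT, BACKGROUND = 0, 1, 2


def should_shed(priority: int, utilization: float, threshold: float = 0.8) -> bool:
    """Probabilistically drop low-priority work once utilization passes a threshold.

    Below the threshold nothing is shed; above it, the drop probability grows with
    both the overload and how unimportant the request is.
    """
    if utilization <= threshold:
        return False
    overload = (utilization - threshold) / (1.0 - threshold)  # 0..1 past the knee
    drop_probability = overload * (priority / (BACKGROUND + 1))
    return random.random() < drop_probability


# At 95% utilization, background work is shed far more often than checkout traffic.
for priority in (CHECKOUT, STOREFRONT, BACKGROUND):
    sheds = sum(should_shed(priority, 0.95) for _ in range(10_000))
    print(priority, sheds / 10_000)
```

A real shedder would measure utilization from live signals (queue depth, CPU, latency) and sit in the request path, returning an early "try again later" response for shed requests.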
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
verified · closed

Splitgraph is hiring Senior Software Engineers (Backend and Frontend)

Region: Worldwide
Salary: $0k - $0k (no salary data published)
Tags: JavaScript, React, C, C++, Python, Senior, Engineer, Nginx, GraphQL, Grafana, Front End, Developer, Video, Cloud, MySQL, API, Analytics, SaaS, Health, Backend
Website: splitgraph.com
4,755 views · 346 applied (7%)

This job post is closed and the position is probably filled. Please do not apply. Work for Splitgraph and want to re-open this job? Use the edit link in the email when you posted the job!


# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## Open Source Toolkit

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## Splitgraph Cloud

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://ww.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph (see the sketch after this post). Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS services for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

Please mention the words **DESERT SPELL GOWN** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

Location: Worldwide
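The stack description above presents the data delivery network as a single Postgres-compatible SQL endpoint fronted by scripted PgBouncer instances. As a rough sketch of what querying such an endpoint looks like from Python, here is a psycopg2 example; the hostname, credentials, and the namespaced table name are placeholders, not real Splitgraph values.

```python
import psycopg2

# Placeholder connection details for a Postgres-compatible SQL endpoint.
conn = psycopg2.connect(
    host="sql-endpoint.example.com",
    port=5432,
    dbname="ddn",
    user="api_key",
    password="api_secret",
)

with conn, conn.cursor() as cur:
    # Hypothetical namespaced repository table addressed directly in SQL.
    cur.execute('SELECT * FROM "some-namespace/some-repo".some_table LIMIT %s', (5,))
    for row in cur.fetchall():
        print(row)

conn.close()
```

Because the endpoint speaks the Postgres wire protocol, any Postgres client, ORM, or BI tool could be pointed at it in the same way.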
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
verified

Prominent Edge is hiring a Remote Lead DevOps Engineer

Salary: $0k - $0k (no salary data published)
Tags: DevOps, Serverless, Python, Engineer, Executive, Front End, Security, Amazon, Cloud, API, Travel, Lead, Linux
Website: prominentedge.com
3,417 views · 178 applied (5%)

We are looking for a Lead DevOps engineer to join our team at Prominent Edge. We are a small, stable, growing company that believes in doing things right. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want engineers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Many of our projects are web applications which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com/ for more information and apply through https://prominentedge.com/careers.

Required skills:

* Experience as a Lead Engineer.
* Minimum of 8 years of total experience, including a minimum of 1 year of web or software development experience.
* Experience automating the provisioning of environments by designing, implementing, and managing configuration and deployment infrastructure-as-code solutions.
* Experience delivering scalable solutions utilizing Amazon Web Services: EC2, S3, RDS, Lambda, API Gateway, Message Queues, and CloudFormation Templates (see the sketch after this post).
* Experience with deploying and administering Kubernetes on AWS, GCP, or Azure.
* Capable of designing secure and scalable solutions.
* Strong *nix administration skills.
* Development in a Linux environment using Bash, Powershell, Python, JS, Go, or Groovy.
* Experience automating and streamlining build, test, and deployment phases for continuous integration.
* Experience with automated deployment technologies such as Ansible, Puppet, or Chef.
* Experience administering automated build environments such as Jenkins and Hudson.
* Experience configuring and deploying logging and monitoring services - fluentd, logstash, GeoHashes, etc.
* Experience with Git/GitHub/GitLab.
* Experience with DockerHub or a container registry.
* Experience with building and deploying containers to a production environment.
* Strong knowledge of security and recovery from a DevOps perspective.

Bonus skills:

* Experience with RabbitMQ and its administration.
* Experience with kops.
* Experience with HashiCorp Vault administration and Goldfish (a frontend Vault UI).
* Experience with Helm for deployment to Kubernetes.
* Experience with CloudWatch.
* Experience with Ansible and/or a configuration management language.
* Experience with Ansible Tower (not necessary).
* Experience with VPNs; OpenVPN preferable.
* Experience with network administration and understanding of network topology and architecture.
* Experience with AWS spot instances or Google preemptible instances.
* Experience with Grafana administration, SSO (Okta or JumpCloud preferable), LDAP / Active Directory administration, CloudHealth or cloud cost optimization.
* Experience with Kubernetes-based software - for example, heptio/ark, ingress-nginx, anchore engine.
* Familiarity with the ELK Stack.
* Familiarity with basic administrative tasks and building artifacts on Windows.
* Familiarity with other cloud infrastructures such as Cloud Foundry.
* Strong web or software engineering experience.
* Familiarity with security clearances in case you contribute to our non-commercial projects.

W2 Benefits:

* Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
* Six weeks paid time off per year (PTO + holidays).
* Six percent 401k matching, vested immediately.
* Free PPO/POS healthcare for the entire family.
* We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
* Want to take time off without using vacation time? Shuffle your hours around in any pay period.
* Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we'll buy you the new version whenever you want.
* Want some training or to travel to a conference that is relevant to your job? We offer that too!
* This organization participates in E-Verify.

About You:

* You believe in and practice Agile/DevOps.
* You are organized and eager to accept responsibility.
* You want a seat at the table at the inception of new efforts; you do not want things "thrown over the wall" to you.
* You are an active listener, empathetic, and willing to understand and internalize the unique needs and concerns of each individual client.
* You adjust your speaking style for your audience and can interact successfully with both technical and non-technical clients.
* You are detail-oriented but never lose sight of the Big Picture.
* You can work equally well individually or as part of a team.
* U.S. citizenship required.

Please mention the words **RIPPLE DESK VERSION** when applying to show you read the job post completely. This is a feature to avoid spam applicants. Companies can search these words to find applicants that read this and see they're human.

Location: United States
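The required skills above include delivering solutions with AWS services such as Lambda, API Gateway, S3, and CloudFormation, plus automating build and deployment phases. As a hedged sketch of one such automation step (not Prominent Edge's actual pipeline), here is a boto3 snippet that ships a new Lambda package; the function name, region, and artifact path are placeholders.

```python
import boto3

# Hypothetical function name and artifact path; real values would come from CI variables.
FUNCTION_NAME = "geospatial-api-handler"
ARTIFACT = "build/lambda.zip"

lam = boto3.client("lambda", region_name="us-east-1")

with open(ARTIFACT, "rb") as f:
    zipped = f.read()

# Ship the new code, then wait until the update has propagated before shifting traffic.
lam.update_function_code(FunctionName=FUNCTION_NAME, ZipFile=zipped, Publish=True)
lam.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)
print("deployed", FUNCTION_NAME)
```

In practice a step like this would sit behind CloudFormation or Terraform and run from CI only after tests pass.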
