Remote OK: Remote Analytics + Engineer + Linux + Technical jobs
11 results for Analytics + Engineer + Linux + Technical

SingleStore
💰 $50k–$105k* (estimated)
Tags: Support, Cloud, Scala, Management, Sales, Engineering, Recruitment, Apache, Executive
👀 1,599 views · ✅ 140 applied (9%)

SingleStore is hiring a Remote Solutions Engineer (India)

Position Overview

We are looking for a Solutions Engineer at SingleStore who is passionate about removing data bottlenecks for customers and enabling real-time data capabilities for some of the most difficult data challenges in the industry. In this role you will work directly with our sales teams and channel partners to identify prospective and current customer pain points where SingleStore can remove those bottlenecks and deliver real-time capabilities. You will provide value-based demonstrations and presentations, and support proofs of concept to validate proposed solutions.

As a Solutions Engineer at SingleStore, you must share our passion for real-time data, fast analytics, and simplified data architecture. You must be comfortable in high-level executive conversations as well as able to deeply understand the technology and its value proposition.

About Our Team

At SingleStore, the Solutions Engineering team epitomizes a dynamic blend of innovation, expertise, and a fervent commitment to meeting complex data challenges head-on. The team is composed of highly skilled individuals who are not only adept with the latest technologies but also instrumental in ensuring that SingleStore is the right fit for our customers.

Our team thrives on collaboration and determination, building some of the most cutting-edge deployments of SingleStore data architectures for our most strategic customers. This involves working directly with product management to ensure that our product not only addresses current data challenges but is also geared up for future advancements.

Beyond technical prowess, our team culture is rooted in a shared passion for transforming how businesses leverage data. We are a community of forward-thinkers, where each member's contribution is valued in our collective pursuit of excellence. Our approach combines industry-leading engineering, visionary design, and a dedicated customer-success ethos to shape the future of database technology. In our team, every challenge is an opportunity for growth, and we support each other in our continuous learning journey. At SingleStore, we're more than a team; we're innovators shaping the real-time data solutions of tomorrow.

Responsibilities

* Engage with both current and prospective clients to understand their technical and business challenges
* Present and demonstrate the SingleStore product offering to Fortune 500 companies
* Stay enthusiastic about the data analytics and data engineering landscape
* Provide valuable feedback to product teams based on client interactions
* Stay up to date with database technologies and the SingleStore product offerings

Qualifications

* Minimum of 3 years' experience in a technical pre-sales role
* Broad experience with large-scale database and/or data warehousing technologies
* Excellent presentation and communication skills, with experience presenting to large corporate organizations
* Experience with Kubernetes and Linux is a plus
* Ability to communicate complex technical concepts to non-technical audiences
* Strong team player with interpersonal skills
* Preferred: experience with data engineering tools such as Apache Spark and Apache Kafka, and ETL tools such as Talend or Informatica
* "Next generation" personality (cloud, AI experience)
* Experience with cutting-edge cloud products/companies
* Demonstrated proficiency in ANSI SQL
* Demonstrated proficiency in Python, Scala, or Java
* Understanding of private and public cloud platforms such as AWS, Azure, GCP, and VMware
* Preferred: experience with prompt engineering

SingleStore delivers the cloud-native database with the speed and scale to power the world's data-intensive applications. With a distributed SQL database that simplifies your data architecture by unifying transactions and analytics, SingleStore empowers digital leaders to deliver exceptional, real-time data experiences to their customers. SingleStore is venture-backed and headquartered in San Francisco, with offices in Sunnyvale, Raleigh, Seattle, Boston, London, Lisbon, Bangalore, Dublin, and Kyiv.

Consistent with our commitment to diversity and inclusion, we value individuals who can work on diverse teams and with a diverse range of people.

To all recruitment agencies: SingleStore does not accept agency resumes. Please do not forward resumes to SingleStore employees. SingleStore is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company.

Salary and compensation: No salary data published by the company, so we estimated the salary based on similar Cloud, Sales, and Engineer jobs: $50,000–$105,000/year
Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
Location: Pune, Maharashtra, India
Apply for this job

👉 Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!

When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay for equipment that the company promises to reimburse later, or for any required training. Those are scams: NEVER PAY FOR ANYTHING! Posts that link to pages about "how to work online" are also scams; don't use or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter; a good check is whether the domain name of the site/email matches the company's main domain name. Scams in remote work are rampant, so be careful! Read more to avoid scams. When you click the apply button above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.


Wrike Careers Page
💰 $60k–$90k* (estimated)
Tags: SaaS, Ansible, RabbitMQ, Grafana, System, Back-End, Security, Web, Java, Cloud, PostgreSQL, Database Admin, Management, Senior, Operations, Excel, Recruitment
👀 903 views · ✅ 32 applied (4%)

Wrike Careers Page is hiring a Remote DBA Infrastructure Engineer

As a Staff Cloud Ops Engineer at Wrike, your primary strength lies in PostgreSQL, where you excel in designing, managing, and optimizing internal database systems. Your advanced skills extend to cloud and data center infrastructure, with an emphasis on security, containers, networking, monitoring, automation, and debugging. You proactively define tasks based on team objectives, guide others, and propose impactful infrastructure improvements.

You will have the opportunity to engage in high-end technical projects, including the implementation of Kafka, the development of an internal database management system built on PostgreSQL, and collaboration on command-and-control systems within an international team.

In this role, you would join a core development team of 250+ engineers developing Wrike and become part of an operations department that is exposed to a wide variety of technologies and systems. Does this sound like you? If so, we'd love to speak with you!

More about your team

We have 14 folks in the SysOps Department, across two teams distributed in Prague, Cyprus, and Tallinn. As a core member of our team you will be:

* Managing the Wrike product infrastructure with a focus on data infrastructure
* Designing reliable solutions to ensure a product uptime SLA of 99.99%
* Working with GCP and other providers in the IaC paradigm
* Implementing and supporting new infrastructure services
* Actively participating in incident response and management, including on-call duties
* Developing and maintaining professional connections within and outside of the team
* Driving end-to-end completion of significant projects, supported by senior team members

Technical environment

We run a Java-based SaaS application in Kubernetes for a massive audience of over 20,000 organizations. Our data resides in more than 100 highly available PostgreSQL clusters across our production and pre-production environments. To ensure seamless data access, we've implemented robust data streaming pipelines. These pipelines empower our back-end and analytics teams to swiftly retrieve accurate data, contributing to the efficiency and success of our operations.

Key technologies and tools include:

* PostgreSQL as the DB platform
* Kafka and RabbitMQ for messaging
* Kubernetes and Helm (service-oriented architecture)
* Nginx, HAProxy, and Istio for load balancing
* GCP, AWS, and Cloudflare as our cloud providers
* Puppet, Ansible, and Terraform for defining everything as code
* Python to automate everything
* Prometheus (VictoriaMetrics) and Zabbix for monitoring
* Graylog, Logstash, and Fluentd for logging
* Jenkins and GitLab CI for build pipelines

You will achieve your best if you have

* Advanced experience in designing, managing, and optimizing database management systems based on PostgreSQL
* Advanced knowledge in at least two of the following areas, and intermediate knowledge of the rest: data networks, security, databases, cloud providers, process automation, containerized application management
* Advanced Linux administration skills, with experience maintaining highly available infrastructure for a web application stack
* Sufficient scripting skills in Python, Bash, or other scripting languages
* Upper-intermediate English skills

You will stand out with

* Advanced experience running a Kubernetes platform
* Advanced experience managing any cloud provider using IaC (AWS/GCP/Azure)
* Strong understanding of Linux fundamentals: security principles, hardware, troubleshooting, etc.
* Monitoring experience with Zabbix, Grafana, or Prometheus
* Advanced experience with any system configuration management tool (Ansible/Puppet/Salt, etc.)

Perks of working with Wrike

* 28 calendar days of paid vacation
* Sick leave compensation
* Life insurance plan
* Health insurance plan
* Fitness plan (800 EUR/year)
* Parental leave
* 2 volunteer days
* Full-remote work and on-demand access to co-working space
* Utility allowance (30 EUR/month, subject to taxation)

Your recruitment buddy will be Marketa Rezacova, IT Recruiter.

Salary and compensation: No salary data published by the company, so we estimated the salary based on similar SaaS, Java, Cloud, Senior, and Engineer jobs: $60,000–$90,000/year
Location: Tallinn, Harju, Estonia
Apply for this job



Coalfire
💰 $60k–$100k* (estimated)
Tags: Jira, SaaS, System, Security, Training, Consulting, Support, Software, Testing, Code, DevOps, Edu, Cloud, Management, Junior, Operations, Operational, Reliability, Health, Engineering
👀 1,536 views · ✅ 131 applied (9%)

Coalfire is hiring a Remote Junior Site Reliability Engineer (US)

About Coalfire

Coalfire is on a mission to make the world a safer place by solving our clients' toughest cybersecurity challenges. We work at the cutting edge of technology to advise, assess, automate, and ultimately help companies navigate the ever-changing cybersecurity landscape. We are headquartered in Denver, Colorado, with offices across the U.S. and U.K., and we support clients around the world.

But that's not who we are; that's just what we do. We are thought leaders, consultants, and cybersecurity experts, but above all else, we are a team of passionate problem-solvers who are hungry to learn, grow, and make a difference. And we're growing fast. We're looking for a Site Reliability Engineer I to support our Managed Services team.

Position Summary

As a Junior Site Reliability Engineer within Coalfire's Managed Services (CMS) group, you will be a self-starter, passionate about cloud technology, who thrives on problem solving. You will work within major public clouds, using automation and your technical abilities to operate the most cutting-edge offerings from Cloud Service Providers (CSPs). This role directly supports leading cloud software companies, providing seamless reliability and scalability of their SaaS products for the largest enterprises and government agencies around the world.

This can be a remote position (you must be located in the United States).

What You'll Do

* Become a member of a highly collaborative engineering team offering a unique blend of cloud infrastructure administration, site reliability engineering, security operations, and vulnerability management across multiple clients.
* Coordinate with client product teams, engineering team members, and other stakeholders to monitor and maintain a secure and resilient cloud-hosted infrastructure to established SLAs, in both production and non-production environments.
* Innovate and implement using automated orchestration and configuration management techniques. Understand the design, deployment, and management of secure and compliant enterprise servers, network infrastructure, boundary protection, and cloud architectures using Infrastructure as Code.
* Create, maintain, and peer review automated orchestration and configuration management codebases, as well as Infrastructure-as-Code codebases. Maintain IaC tooling and versioning within client environments.
* Implement and upgrade client environments with CI/CD infrastructure code, and provide internal feedback to development teams on environment requirements and necessary alterations.
* Work across AWS, Azure, and GCP, understanding and utilizing their unique native services in client environments.
* Configure, tune, and troubleshoot cloud-based tools, and manage cost, security, and compliance for clients' environments.
* Monitor and resolve site stability and performance issues related to functionality and availability.
* Work closely with client DevOps and product teams to provide 24x7x365 support to environments through client ticketing systems.
* Support the definition, testing, and validation of incident response and disaster recovery documentation and exercises.
* Participate in on-call rotations as needed to support client-critical events and operational needs that may fall outside business hours.
* Support testing and data reviews to collect and report on the effectiveness of current security and operational measures, in addition to remediating deviations from those measures.
* Maintain detailed diagrams representative of each client's cloud architecture.
* Maintain, optimize, and peer review standard operating procedures, operational runbooks, technical documents, and troubleshooting guidelines.

What You'll Bring

* BS or above in a related Information Technology field, or an equivalent combination of education and experience
* 2+ years' experience in 24x7x365 production operations
* Fundamental understanding of networking and network troubleshooting
* 2+ years' experience installing, managing, and troubleshooting Linux and/or Windows Server operating systems in a production environment
* 2+ years' experience supporting cloud operations and automation in AWS, Azure, or GCP (and aligned certifications)
* 2+ years' experience with Infrastructure as Code and orchestration/automation tools such as Terraform and Ansible
* Experience with IaaS platform capabilities and services (cloud certifications expected)
* Experience with ticketing tools such as Jira and ServiceNow
* Experience using environmental analytics tools such as Splunk and Elastic Stack for querying, monitoring, and alerting
* Experience in at least one primary scripting language (Bash, Python, PowerShell)
* Excellent communication, organizational, and problem-solving skills in a dynamic environment
* Effective documentation skills, including technical diagrams and written descriptions
* Ability to work as part of a team with a professional attitude and demeanor

Bonus Points

* Previous experience in a consulting role within dynamic, fast-paced environments
* Previous experience supporting a 24x7x365 highly available environment for a SaaS vendor
* Experience supporting security and/or infrastructure incident handling and investigation, and/or system scenario re-creation
* Experience with container orchestration solutions such as Kubernetes, Docker, EKS, and/or ECS
* Experience working within an automated CI/CD pipeline for release development, testing, remediation, and deployment
* Cloud-based networking experience (Palo Alto, Cisco ASAv, etc.)
* Familiarity with frameworks such as FedRAMP, FISMA, SOC, ISO, HIPAA, HITRUST, and PCI
* Familiarity with configuration baseline standards such as CIS Benchmarks and DISA STIGs
* Knowledge of encryption technologies (SSL, PKI)
* Experience with diagramming tools (Visio, Lucidchart, etc.)
* Application development experience for cloud-based systems

Why You'll Want to Join Us

At Coalfire, you'll find the support you need to thrive personally and professionally. In many cases, we provide a flexible work model that empowers you to choose when and where you'll work most effectively, whether at home or in an office. Regardless of location, you'll experience a company that prioritizes connection and wellbeing, and be part of a team where people care about each other and our communities. You'll have opportunities to join employee resource groups, participate in in-person and virtual events, and more. And you'll enjoy competitive perks and benefits to support you and your family, like paid parental leave, flexible time off, certification and training reimbursement, a digital mental health and wellbeing support membership, and comprehensive insurance options.

At Coalfire, equal opportunity and pay equity are integral to the way we do business. A reasonable estimate of the compensation range for this role is $95,000 to $110,000, based on national salary averages. The actual salary offered to the successful candidate will be based on job-related education, geographic location, training, licensure, certifications, and other factors. You may also be eligible to participate in annual incentive, commission, and/or recognition programs. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Salary and compensation: No salary data published by the company, so we estimated the salary based on similar SaaS, DevOps, Cloud, Junior, and Engineer jobs: $60,000–$100,000/year
Location: United States
Apply for this job



Zelus Analytics
🌏 Probably worldwide
💰 $60k–$110k* (estimated)
Tags: Design, System, Back-End, Python, Support, Software, Test, Senior, Junior, Stats, Recruiting, Engineering, Full-Time
Zelus Analytics is hiring a Remote Data Engineer

We are seeking Data Engineers with a passion for sports to develop cloud-based data pipelines and automated data processing for our world-class sports intelligence platforms in baseball, basketball, cricket, eSports, football (American), golf, hockey, soccer, and tennis. Through your work, you can support the professional teams in our exclusive partner network in their efforts to compete and win championships.

Zelus Analytics is a fully remote company working directly with teams across the NBA, MLB, NFL, IPL, and NHL, in addition to a number of soccer teams around the globe. Zelus unites a fast-growing startup environment with a research-focused culture that embraces our core values of integrity, innovation, and inclusion. We pride ourselves on providing meaningful mentorship that offers our team the opportunity to develop and expand their skill sets while also engaging with the broader analytics community. In so doing, we hope to create a new path for a broader group of highly talented people to push the cutting edge of sports analytics.

We believe that a diverse team is vital to building the world's best sports intelligence platform. Thus, we strongly encourage you to apply if you identify with any marginalized community across race, ethnicity, gender, sexual orientation, veteran status, or disability. At Zelus, we are committed to creating an inclusive environment where all of our employees are enabled and empowered to succeed and thrive.

As Zelus employees advance in experience and level, they are expected to build on their competencies and expertise and demonstrate increasing impact, independence, and leadership within their roles.

More specifically, as a Zelus Data Engineer, you will be expected to:

* Design, develop, document, and maintain the schemas and ETL pipelines for our internal sports databases and data warehouses
* Implement and test collection, mapping, and storage procedures for secure access to team, league, and third-party data sources
* Develop algorithms for quality assurance and imputation to prepare data for exploratory analysis and quantitative modeling
* Profile and optimize automated data processing tasks
* Coordinate with data providers around planned changes to raw data feeds
* Deploy and maintain system and database monitoring tools
* Collaborate and communicate effectively in a distributed work environment
* Fulfill other related duties and responsibilities, including rotating platform support

Additionally, a Data Engineer II will be expected to:

* Create data ingestion and integration workflows that scale and can be easily adapted to future use cases
* Assess, provision, monitor, and maintain the appropriate infrastructure and tooling to execute data engineering workflows

Additionally, a Senior Data Engineer will be expected to:

* Research, design, and test generalizable software architectures for data ingestion, processing, and integration, and guide organizational adoption
* Collaborate with data science to design and implement vendor-agnostic data models that support downstream modeling efforts
* Lead team-wide implementation of data engineering standards
* Effectively communicate complex technical concepts to both internal and external audiences
* Provide guidance and technical mentorship for junior engineers
* Assist with recruiting and outreach for the engineering team, including building a diverse network of future candidates

Additionally, a Senior Data Engineer II will be expected to:

* Identify and implement generalizable strategies for infrastructure maintenance and data-related cost savings
* Break down complex data engineering projects into actionable work plans, including proposed task assignments with clear design specifications
* Assist in defining data engineering standards for the organization

A qualified Data Engineer candidate will be able to demonstrate several of the following and will be excited to learn the rest through the mentorship provided at Zelus:

* Academic and/or industry experience in back-end software design and development
* Experience with ETL architecture and development in a cloud-based environment
* Fluency in SQL development and an understanding of database and data warehousing technologies
* Proficiency with Python (preferred), Scala, and/or other data-oriented programming languages
* Experience with automated data quality validation across large data sets
* Familiarity working with Linux servers in a virtualized/distributed environment
* Strong software-engineering and problem-solving skills

A qualified Senior Data Engineer candidate will be able to demonstrate all of the above at a higher level of competency, plus the following:

* Expertise developing complex databases and data warehouses for large-scale, cloud-based analytics systems
* Experience with task orchestration and workflow automation tools
* Experience building and overseeing team-wide data quality initiatives
* Experience adapting, retraining, and retooling in a rapidly changing technology environment
* Desire and ability to successfully mentor junior engineers

Starting salaries range from*:

* $87,000 to $102,000 for Data Engineer
* $102,000 to $118,000 for Data Engineer II
* $118,000 to $136,000 for Senior Data Engineer
* $136,000 to $160,000 for Senior Data Engineer II

*Compensation paid in non-US currency will be in a comparable range adjusted by differences in total cost of employment.

Zelus has a fully distributed workforce, spanning multiple states and countries, with a formal process for establishing compensation equity across its global staff. In addition to competitive salaries, our full-time compensation packages include equity grants and comprehensive benefits, such as an annual incentive bonus plan, supplemental health, vision, and dental insurance, and flexible PTO, all of which allow us to attract and retain a world-class team.

As an equal opportunity employer, Zelus does not discriminate on the basis of race, ethnicity, color, religion, creed, gender, gender expression or identification, sexual orientation, marital status, age, national origin, disability, genetic information, military status, or any other characteristic protected by law. It is our policy to provide reasonable accommodations for applicants and employees with disabilities. Please let us know if reasonable accommodation is needed to participate in the job application or interview process.

In most jurisdictions, Zelus is an at-will employer; employment at Zelus is for an indefinite period of time and is subject to termination by the employer or the employee at any time, with or without cause or notice.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Design, Python, Senior, Engineer and Linux jobs: $60,000 to $110,000/year
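The posting asks for "automated data quality validation across large data sets." As a minimal, hypothetical sketch (not Zelus code; the column names and thresholds are invented), such validation can be as simple as screening incoming rows against per-column rules before they enter an ETL pipeline, diverting failures for imputation or review:

```python
# Hypothetical sketch of rule-based data quality validation for an ETL feed.

def validate_rows(rows, rules):
    """Split rows into (valid, rejected) using rules: {column: predicate}."""
    valid, rejected = [], []
    for row in rows:
        ok = all(col in row and pred(row[col]) for col, pred in rules.items())
        (valid if ok else rejected).append(row)
    return valid, rejected

# Illustrative rules for a pitch-tracking feed (column names are made up):
rules = {
    "pitch_id": lambda v: isinstance(v, str) and v != "",
    "release_speed_mph": lambda v: isinstance(v, (int, float)) and 50 <= v <= 110,
}

rows = [
    {"pitch_id": "p1", "release_speed_mph": 94.3},
    {"pitch_id": "p2", "release_speed_mph": 300.0},  # sensor glitch -> rejected
]
valid, rejected = validate_rows(rows, rules)
# rejected rows can be queued for imputation or manual review
```

In production this pattern is usually expressed through a framework (e.g. Great Expectations or dbt tests) rather than hand-rolled predicates, but the shape of the check is the same.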
#Benefits
401(k), Distributed team, Async, Vision insurance, Dental insurance, Medical insurance, Unlimited vacation, Paid time off, 4 day workweek, 401k matching, Company retreats, Coworking budget, Learning budget, Free gym membership, Mental wellness budget, Home office budget, Pay in crypto, Pseudonymous, Profit sharing, Equity compensation, No whiteboard interview, No monitoring system, No politics at work, We hire old (and young)
Apply for this job

๐Ÿ‘‰ Please reference you found the job on Remote OK, this helps us get more companies to post here, thanks!



Spotter


๐Ÿ’ฐ $50k - $80k*

Tags: Amazon, System, Security, Docker, Support, Software, Testing, Code, Web, Scrum, Voice, Quality Assurance, DevOps, Lambda, Cloud, Ads, Senior, Reliability, Engineering

Spotter

Apply now

๐Ÿ‘€ 2,468 views

โœ… 177 applied (7%)


Spotter is hiring a

Remote Senior AWS Cloud Engineer

What You'll Do:

We're looking for a talented and intensely curious Senior AWS Cloud Engineer who is nimble and focused, with a startup mentality. In this newly created role you will be the liaison between data engineers, data scientists, and analytics engineers. You will work to create cutting-edge architecture that provides increased performance, scalability, and concurrency for Data Science and Analytics workflows.

Responsibilities

* Provide AWS infrastructure support and systems administration for new and existing products implemented through: IAM, EC2, S3, AWS networking (VPC, IGW, NGW, ALB, NLB, etc.), Terraform, CloudFormation templates, and security: Security Groups, GuardDuty, CloudTrail, Config, and WAF.
* Monitor and maintain production, development, and QA cloud infrastructure resources for compliance with all six pillars of the AWS Well-Architected Framework, including the Security pillar.
* Develop and maintain Continuous Integration (CI) and Continuous Deployment (CD) pipelines needed to automate testing and deployment of all production software components as part of a fast-paced, agile Engineering team. Technologies required: ElastiCache, Bitbucket Pipelines, GitHub, Docker Compose, Kubernetes, Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS), and Linux-based server instances.
* Develop and maintain Infrastructure as Code (IaC) services for creation of ephemeral cloud-native infrastructure hosted on Amazon Web Services (AWS) and Google Cloud Platform (GCP). Technologies required: AWS CloudFormation, Google Cloud Deployment Manager, AWS SSM, YAML, JSON, Python.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on AWS needed to ensure 99.99% uptime. Technologies required: AWS IAM, AWS CloudWatch, AWS EventBridge, AWS SSM, AWS SQS, AWS SNS, AWS Lambda and Step Functions, Python, Java, RDS Postgres, RDS MySQL, AWS S3, Docker, AWS Elasticsearch, Kibana, AWS Amplify.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on AWS needed to ensure 100% cybersecurity compliance and surveillance. Technologies required: AWS SSM, YAML, JSON, Python, RDS Postgres, Tenable, CrowdStrike EPP, Sophos EPP, Wiz CSPM, Linux Bash scripts.
* Design and code technical solutions that improve the scalability, performance, and reliability of all Data Acquisition pipelines. Technologies required: Google Ads APIs, YouTube Data APIs, Python, Java, AWS Glue, AWS S3, AWS SNS, AWS SQS, AWS KMS, AWS RDS Postgres, AWS RDS MySQL, AWS Redshift.
* Monitor and remediate server and application security events as reported by CrowdStrike EPP, Tenable, Wiz CSPM, and Invicti.

Who you are:

* Minimum of 5 years of systems administration or DevOps engineering experience on AWS
* Track record of success in systems administration, including system design, configuration, maintenance, and upgrades
* Excels in architecting, designing, developing, and implementing cloud-native AWS platforms and services
* Knowledgeable in managing cloud infrastructure in a production environment to ensure high availability and reliability
* Proficient in automating system deployment, operation, and maintenance using Infrastructure as Code: Ansible, Terraform, CloudFormation, and other common DevOps tools and scripting
* Experience with Agile processes in a structured setting required; Scrum and/or Kanban
* Security and compliance standards experience, such as PCI and SOC, as well as data privacy and protection standards, a big plus
* Experienced in implementing dashboards and data for decision-making related to team and system performance, relying heavily on telemetry and monitoring
* Exceptional analytical capabilities
* Strong communication skills and ability to effectively interact with engineering and business stakeholders

Preferred Qualifications:

* Bachelor's degree in technology, engineering, or a related field
* AWS certifications: Solutions Architect, DevOps Engineer, etc.

Why Spotter:

* Medical and vision insurance covered up to 100%
* Dental insurance
* 401(k) matching
* Stock options
* Complimentary gym access
* Autonomy and upward mobility
* Diverse, equitable, and inclusive culture, where your voice matters

In compliance with local law, we are disclosing the compensation, or a range thereof, for roles that will be performed in Culver City. Actual salaries will vary and may be above or below the range based on various factors including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. A reasonable estimate of the current pay range is $100K to $500K salary per year. The range listed is just one component of Spotter's total compensation package for employees. Other rewards may include an annual discretionary bonus and equity.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Docker, Testing, DevOps, Cloud, Senior, Engineer and Linux jobs: $50,000 to $80,000/year
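The posting's 99.99% uptime target implies a concrete monthly error budget. As a back-of-the-envelope sketch (not Spotter tooling, just the arithmetic an on-call engineer does when sizing alarms and maintenance windows):

```python
# Convert an uptime SLA percentage into an allowed-downtime budget.

def allowed_downtime_minutes(sla_percent: float, period_minutes: int) -> float:
    """Minutes of downtime permitted over a period while still meeting the SLA."""
    return period_minutes * (1 - sla_percent / 100.0)

MINUTES_PER_30_DAY_MONTH = 30 * 24 * 60  # 43,200 minutes

budget = allowed_downtime_minutes(99.99, MINUTES_PER_30_DAY_MONTH)
# 99.99% over a 30-day month leaves roughly 4.3 minutes of downtime,
# which is why such targets require automated detection and failover
# rather than manual intervention.
```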
#Location
Los Angeles, California, United States
Apply for this job




Chan Zuckerberg Biohub - San Francisco


๐Ÿ’ฐ $58k - $85k*

Tags: Consulting, Recruiter, Support, Software, Growth, Director, Edu, Cloud, Node, Management, Biology, Engineering

Chan Zuckerberg Biohub - San Francisco

Apply now

๐Ÿ‘€ 884 views

โœ… 34 applied (4%)


Chan Zuckerberg Biohub - San Francisco is hiring a

Remote AI ML HPC Principal Engineer

The Opportunity

The Chan Zuckerberg Biohub Network has an immediate opening for an AI/ML High Performance Computing (HPC) Principal Engineer. The CZ Biohub Network is composed of several new institutes that the Chan Zuckerberg Initiative created to do great science that cannot be done in conventional environments. The CZ Biohub Network brings together researchers from across disciplines to pursue audacious, important scientific challenges. The Network consists of four institutes throughout the country: San Francisco, Silicon Valley, Chicago, and New York City. Each institute closely collaborates with the major universities in its local area. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports several hundred of the brightest, boldest engineers, data scientists, and biomedical researchers in the country, with the mission of understanding the mysteries of the cell and how cells interact within systems.

The Biohub is expanding its global scientific leadership, particularly in the area of AI/ML, with the acquisition of the largest GPU cluster dedicated to AI for biology. The AI/ML HPC Principal Engineer will be tasked with helping to realize the full potential of this capability, in addition to providing advanced computing capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other computing needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure supplemented with cloud computing to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc. for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support Machine Learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production compute, interconnect, storage hardware, software systems, and storage subsystems
* Configuring and administering parallel, network-attached storage (Lustre, GPFS on ESS, NFS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, VAST, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.) and implementing fairshare, node sharing, backfill, etc. for compute and GPUs
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* OpenACC, nvhpc, and an understanding of CUDA driver compatibility issues
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High-performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software (Modules, Spack)
* Familiarity with source control tools (Git or SVN)
* Experience supporting the use of popular ML frameworks such as PyTorch and TensorFlow
* Familiarity with cybersecurity tools, methodologies, and best practices for protecting systems used for science
* Experience with movement, storage, backup, and archive of large-scale data

Nice to have:

* An advanced degree is strongly desired

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Consulting, Education, Cloud, Node, Engineer and Linux jobs: $57,500 to $85,000/year
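The posting's SLURM and GPU-scheduling requirements translate into everyday glue code: helping researchers express a GPU job as a batch script. A minimal illustrative sketch (partition, account, and module names below are invented, not Biohub configuration):

```python
# Illustrative generator for a SLURM batch script requesting GPUs.

def gpu_sbatch_script(job_name, gpus, hours, command,
                      partition="gpu", account="biohub"):
    """Return the text of an sbatch file requesting `gpus` GPUs on one node."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",     # site-specific partition name
        f"#SBATCH --account={account}",         # site-specific account
        "#SBATCH --nodes=1",
        f"#SBATCH --gres=gpu:{gpus}",           # generic-resource GPU request
        f"#SBATCH --time={hours:02d}:00:00",
        "",
        "module load cuda",                     # site-specific module name
        command,
        "",
    ])

script = gpu_sbatch_script("train-cellnet", gpus=4, hours=12,
                           command="srun python train.py")
# write `script` to job.sh and submit with: sbatch job.sh
```

The same request shape underlies fairshare and backfill tuning: the scheduler can only pack jobs well when their GPU counts and time limits are stated explicitly.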
#Location
San Francisco, California, United States
Apply for this job




Chan Zuckerberg Biohub - San Francisco


๐Ÿ’ฐ $50k - $85k*

Tags: Consulting, Recruiter, Support, Software, Growth, Director, Edu, Cloud, Management, Biology, Engineering

Chan Zuckerberg Biohub - San Francisco

Apply now

๐Ÿ‘€ 997 views

โœ… 46 applied (5%)


Chan Zuckerberg Biohub - San Francisco is hiring a

Remote HPC Principal Engineer

The Opportunity

The Chan Zuckerberg Biohub has an immediate opening for a High Performance Computing (HPC) Principal Engineer. The CZ Biohub is a one-of-a-kind independent non-profit research institute that brings together three leading universities (Stanford, UC Berkeley, and UC San Francisco) into a single collaborative technology and discovery engine. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports over 100 of the brightest, boldest engineers, data scientists, and biomedical researchers in the Bay Area, with the mission of understanding the underlying mechanisms of disease through the development of tools and technologies and their application to therapeutics and diagnostics.

This position will be tasked with strengthening and expanding the scientific computational capacity to further the Biohub's expanding global scientific leadership. The HPC Principal Engineer will also provide IT capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle, including data ingest, analysis, management and storage, computation, authentication, tool development, and many other IT needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure supplemented with cloud computing to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc. for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support Machine Learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable. An advanced degree is strongly desired.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production storage hardware, software systems, storage networks, and systems
* Configuring and administering parallel, network-attached storage (Lustre, NFS, ESS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.)
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High-performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software
* Familiarity with source control tools (Git or SVN)

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* Principal Engineer: $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar Consulting, Education, Cloud, Engineer and Linux jobs: $50,000 to $85,000/year
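The posting's "capacity planning" requirement for on-prem storage comes down to estimates like this rough sketch (not Biohub tooling; replica counts and overhead fractions are illustrative assumptions): raw capacity shrinks substantially once data protection and filesystem overhead are accounted for.

```python
# Rough usable-capacity estimate for a parallel filesystem.

def usable_tb(raw_tb: float, replicas: int = 3,
              overhead_fraction: float = 0.10) -> float:
    """Usable terabytes given raw capacity, replica count, and FS overhead.

    Assumes full replication (e.g. Ceph-style 3x); erasure coding or RAID
    would use a different divisor.
    """
    if replicas < 1 or not (0 <= overhead_fraction < 1):
        raise ValueError("invalid replication or overhead settings")
    return raw_tb / replicas * (1 - overhead_fraction)

# 1 PB raw with 3x replication and ~10% overhead leaves about 300 TB usable,
# which is why raw-disk purchases are sized well above the science requirement.
estimate = usable_tb(1000.0)
```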
#Location
San Francisco, California, United States
Apply for this job


