
Data Platform Engineer / Hadoop Operations Engineer | Greater NYC Area

About Foursquare:

Since our inception in 2009, Foursquare has been a leading force in changing how location information enriches our real-world and digital lives. As a location intelligence company, Foursquare comprises two well-known consumer apps, Foursquare and Swarm, as well as thriving media and enterprise products. Our B2B offerings include Places (for developers), Pinpoint and Attribution (for marketers), and Place Insights (for analysts, based on the world's largest foot traffic panel). With more than 200 people in our New York and San Francisco offices and in sales offices around the globe, we're dedicated to our trailblazing mission: enriching consumer experiences and informing business decisions with location intelligence.

About the Infrastructure Team:

As a member of Foursquare's Infrastructure team, you will use your strong background in distributed systems to help build the core online and offline platforms that drive the services for our Enterprise and Consumer facing products. We're passionate about tackling tough infrastructure challenges (especially scaling problems), and look for others who like to dive deep into the code to help solve hard problems. You should be comfortable running with your own ideas and eager to learn new skills on a bleeding edge platform. We use a variety of tools, technologies, and languages to build software (e.g., Scala, Hadoop, Python, Thrift, MongoDB, Memcached, Redis, Kafka, Chef, Aurora, Mesos, RocksDB, Luigi, Pants), but experience with equivalent ones will do just fine.

Join us to help build and maintain the rock-solid core infrastructure on which we build our features. Here are some high-level areas you could get involved in:

  • Build a cost-effective and seamless way to run pipelines across our on-premises Hadoop cluster and Amazon EMR.
  • Build and manage tools to analyze, monitor, and optimize CPU, core, memory, and disk utilization of services that run on our Aurora and Hadoop clusters.
  • Scale and modernize all aspects of our data lake storage systems.

Qualifications:

    • Minimum 3 years of work experience maintaining, optimizing, and resolving issues on Hadoop clusters.
    • Good understanding of Hadoop ecosystem component architecture (HDFS, YARN, Hive, Kafka, Spark, Kerberos) and working knowledge of each.
    • Experienced with best-practice installation and upgrades of Hadoop clusters.
    • Experienced with Chef, Puppet or Ansible for configuration management.
    • Experienced with migrating Hadoop infrastructure from bare-metal to cloud platforms.
    • Good knowledge of AWS concepts is a plus.
    • Experience with Luigi scheduler is a plus.
    • Comfortable in a small and fast-paced startup environment.
    • Bachelor's degree or higher in Computer Science, Electrical Engineering, or a related field.

Foursquare is proud to foster an inclusive environment that is free from discrimination. We strongly believe that in order to build the best products, we need a diversity of perspectives and backgrounds. This leads to a more delightful experience for our users and team members. We value listening to every voice, and we encourage everyone to come be a part of building a company and products we love.

Foursquare is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected Veteran status, or any other characteristic protected by law.
