Lead Data Engineer
Location: New York
NCC Media is the only media, data and technology company that represents video programming providers in every US market. Our mission is to provide national, regional and local marketers the unique ability to reach today’s consumers in premium television programming and in targeted online content on every screen.
NCC Media represents every major cable, satellite and telco service provider in the country and is jointly owned by three of the nation’s largest providers: Comcast, Spectrum, and Cox Communications. Our nationwide team of over 500 people and our commitment to constant innovation and growth make NCC Media the best choice for reaching connected consumers.
We are seeking a Data Engineering leader who will report to our CTO and be responsible for building and running our team of engineers. Our data engineering team is responsible for our data analytics and data pipeline applications using Spark, Redshift, Scala and Python. The team consists of data pipeline builders and data wranglers who develop and optimize our systems.
We inhale massive amounts of data from over 75 million cable boxes around the United States. We are developing unique and powerful data analytics and data pipeline systems on Amazon Web Services technologies (AWS EMR, AWS Data Pipeline, Glue, Redshift, etc.), and interact constantly with our product development and data science counterparts.
Responsibilities:
• Build the team: recruit, onboard, develop and manage a team of world-class data engineers who will design and build our data pipeline
• Oversee the strategy for assembling, combining and transforming multiple large, complex data sets
• Work with product and analytics teams to build analytics tools using the data pipeline that will provide actionable insights to end users
• Work with the team to design, build, and maintain efficient, reusable, and reliable code
• Devise the tools and processes for identifying bottlenecks and bugs, and develop solutions to mitigate and address these issues
• Help maintain code quality, organization, and automation
Qualifications:
• Experience leading a “big data” engineering team
• Experience working cross-functionally with counterparts in product, analytics, data science, operations and other related functions
• Experience building and optimizing ‘big data’ data pipelines, architectures, data sets and tools (Hadoop, Spark, Presto, etc.)
• Advanced SQL knowledge and experience working with relational databases
• Expertise within the AWS ecosystem
• Familiarity with various design and architectural patterns
• Knack for writing clean, readable, maintainable, and reusable code
• Experience implementing automated testing platforms and unit tests
• Proficient understanding of code versioning tools such as Git, SVN, VSTS
• Academic credentials in computer science, computer engineering or a related major