PeerIQ is transforming the way lending and securitization markets work. Meeting the needs across the credit funding cycle – from loan purchasing to financing to securitization – we work with industry leaders to unlock capital at scale. We aim to bridge the gap between originators and the capital markets so that investors can invest with confidence. Our employees come from the technology and financial sectors, combining the best of both to change the game of consumer credit.

PeerIQ is looking for a multi-talented, intellectually curious, and process-minded Lead Data Analyst with hands-on experience managing data processing routines (ETL processes) and validating data quality. You will play a key role in orchestrating and automating the data ingestion pipeline to make it more flexible while maintaining a high level of quality. You will also be expected to perform ad-hoc data analysis and investigations and to interact with internal teams and clients to resolve data issues. The ideal candidate will have worked in a data operations/analyst role, preferably at an investment bank or a fintech, and will have a basic understanding of consumer credit and related securitized products.

You must have a strong sense of data ownership and a passion for technology and automation, be a self-starter, and thrive in a fast-paced, unstructured startup environment.

RESPONSIBILITIES
  • Partner closely with data engineering to build highly configurable, scalable, robust data processing infrastructure and applications
  • Support daily ETL runs and resolve data pipeline issues in a timely manner
  • Provide analytical support to the Client Delivery team with data onboarding for new clients and recurring analyses, including ad hoc reporting
  • Receive, inspect, validate, transform, clean, and load data in a variety of formats (e.g., Excel, CSV, XML) from a variety of sources using open-source tools (e.g., Jupyter Notebook, PyCharm, bash); see the sketch after this list
  • Identify erroneous or anomalous data and correct or remove it to restore data sets to a reliable state
  • Document process and methodology
  • Provide mentorship to junior analysts and help grow the Data Team
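
For illustration, here is a minimal sketch of the kind of ingest-and-validate step described above, written in Python with pandas. The file name, column names, and quality rules are hypothetical assumptions for the example, not PeerIQ's actual schema or pipeline.

```python
# Hypothetical example: file name, columns, and rules below are illustrative
# assumptions, not PeerIQ's actual schema or pipeline.
import pandas as pd

REQUIRED_COLUMNS = {"loan_id", "origination_date", "principal", "interest_rate"}

def load_and_validate(path: str) -> pd.DataFrame:
    """Load a loan tape from CSV, check basic quality rules, and return clean rows."""
    df = pd.read_csv(path, parse_dates=["origination_date"])

    # Structural check: fail fast if expected columns are missing.
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {sorted(missing)}")

    # Flag anomalous records rather than silently dropping them.
    bad = df[(df["principal"] <= 0) | (df["interest_rate"] < 0)]
    if not bad.empty:
        bad.to_csv("rejected_rows.csv", index=False)

    # Return only rows that pass validation, deduplicated on the key.
    return df.drop(bad.index).drop_duplicates(subset="loan_id")

if __name__ == "__main__":
    loans = load_and_validate("loan_tape.csv")
    print(f"Loaded {len(loans)} validated loans")
```
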
QUALIFICATIONS
  • 7+ years of experience in a Data Analyst, Data Scientist, or Data Engineer role supporting ETL/ELT data pipelines and workflow orchestration tools (e.g., Airflow); a minimal orchestration sketch follows this list
  • 5+ years of experience at leading financial institutions or high-growth startups, handling large datasets daily using applications such as Redshift, Jupyter, etc.
  • Strong competency in SQL and Python (Object Oriented Programming)
  • Experience operating in an SDLC environment and deploying code (a GitHub portfolio is a big plus!)
  • Strong analytical and technical skills to troubleshoot issues, analyze root causes, quickly come up with possible solutions, document changes, and communicate organizational impact
  • Excellent verbal communication skills and attention to detail
  • Demonstrated sense of urgency in maintaining data integrity and experience automating recurring analyses
  • Excited to learn new technologies and data transformation/analysis in general
  • Proactively seek to automate data operations and collaborate with cross-functional teams to improve processes
  • Working knowledge of corporate banking and capital market products
  • Interested in learning more about consumer lending analytics and the general finance domain.
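
For illustration, here is a minimal sketch of the kind of workflow orchestration referenced in the qualifications (e.g., Airflow). The DAG id, schedule, and task bodies are hypothetical placeholders, not an actual PeerIQ pipeline.

```python
# Hypothetical sketch of a daily ETL orchestration in Airflow; task names and
# schedule are assumptions for the example only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull raw files from the upstream source (placeholder)."""
    print("extracting raw loan data")

def validate():
    """Run data-quality checks before loading (placeholder)."""
    print("validating schema and business rules")

def load():
    """Load validated data into the warehouse (placeholder)."""
    print("loading into Redshift")

with DAG(
    dag_id="daily_loan_tape_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence each day.
    extract_task >> validate_task >> load_task
```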