PeerIQ is transforming the way lending and securitization markets work. Meeting the needs across the credit funding cycle – from loan purchasing to financing to securitization – we work with industry leaders to unlock capital at scale. We aim to bridge the gap between originators and the capital markets so that investors can invest with confidence. Our employees come from the technology and financial sectors, combining the best of both to change the game of consumer credit.

PeerIQ is looking for a multi-talented, intellectually curious, and process-minded Data Analyst with hands-on experience supporting data processing routines (ETL processes) and validating data quality. You will play a key role in orchestrating and automating the data ingestion pipeline to make it more flexible while maintaining a high level of quality. You will also perform ad-hoc data analysis and investigations and interact with internal teams and clients to resolve data issues. The ideal candidate will have worked in a data operations/analyst role, preferably at an investment bank or a fintech, and will have a basic understanding of consumer credit and related securitized products.

You must have a strong sense of data ownership and a passion for technology and automation, be a self-starter, and thrive in a fast-paced, unstructured startup environment.

  • Partner closely with data engineering to build highly configurable, scalable, robust data processing infrastructure and applications
  • Support daily ETL and resolve data pipeline issues in a timely manner
  • Provide analytical support to the Client Delivery team with data onboarding for new clients and recurring analyses, including ad-hoc reporting
  • Receive, inspect, validate, transform, clean, and load data received in a variety of formats (e.g., Excel, CSV, XML) from a variety of sources using various open-source applications (e.g., Jupyter Notebook, PyCharm, bash)
  • Identify erroneous or anomalous data and take action to correct or remove errors in order to restore data sets to reliable levels
  • Document processes and methodology


  • 3+ years of experience in a Data Analyst, Data Scientist, or Data Engineer role supporting ELT/ETL data pipelines and workflow orchestration tools (e.g., Airflow)
  • 2+ years of experience at a leading financial institution or a high-growth startup, handling large datasets on a daily basis
  • Experience with Git version control and with a broad set of data integration technologies and databases (GitHub portfolio is a big plus!)
  • Strong competency in SQL and Python (Object Oriented Programming)
  • Excellent verbal communication skills and attention to detail
  • Demonstrated sense of urgency in maintaining data integrity, and experience automating recurring analyses
  • Excitement about learning new technologies and data transformation/analysis in general
  • Proactive in automating data operations and collaborating with cross-functional teams to improve processes
  • Working knowledge of corporate banking and capital market products (nice to have)


Please also include the following with your application (mandatory):

  1. Description of the most interesting data analysis you’ve done, key findings, and its impact
  2. Link to or attachment of code you’ve written related to data analysis