New York

Senior Data Engineer


The Cadre team is growing. We believe software isn’t just eating the world, it’s eating finance, and we’re feeding the beast. We’re building, from the ground up, a technology-driven trading platform for commercial real estate, a coveted asset class previously accessible only to the largest institutional investors.

Because we’re passionate about opening access to these historically exclusive assets to an ever-broader group of participants, we’re relentlessly building a marketplace so intuitive, transparent, and efficient that individuals and institutions alike are empowered to invest. With technology as our core engine, we’re engineering machine learning and data science strategies to discover and vet better assets, faster.

Cadre has raised roughly $70 million to date from outstanding venture investors including Khosla Ventures, Thrive Capital, General Catalyst, Founders Fund, and Goldman Sachs, firms that have backed companies such as Spotify, Oscar, Jet, Stripe, Slack, ZocDoc, GitHub, Airbnb, and Snapchat.

Join our team, which includes individuals from Google, MongoDB, Square, Amplify, and more.

We’re looking for a Senior Data Engineer to join us in building out our Data Engineering team and platform. We are focused on quantifying and automating every step of the investment process, from discovery to due diligence to asset management, through a data aggregation engine. We love engaging, interactive ways to visualize data: we currently use a combination of Tableau, Mapbox, and D3. You will play a key role in shaping our team, our roadmap, and our technical strategy.


Responsibilities
  • Work closely with fellow data and software engineers to scale our data processing platform to handle a wide variety of structured and unstructured data sources
  • Design, implement, and scale the end-to-end pipelines powering our data-driven investment models
  • Feed processed data into visualization tools to generate market- and asset-level insights
  • Leverage machine learning techniques to increase the performance of our existing models
  • Work on novel ways to visualize and present data to business stakeholders


Requirements
  • BS, MS, or PhD in computer science, engineering, or a related field
  • 5+ years of experience with data-intensive backend programming
  • You understand experimental design and can build for measurement and interpretation of results
  • You write clean, well-structured, production-quality code in Python, Java, Scala, or similar
  • You have built and deployed large-scale ETL pipelines: from data ingestion and processing to storage and validation
  • Experience with SQL and NoSQL data stores (Postgres, Redshift, Redis, Vertica, Cassandra, etc.)
  • Experience scraping websites with structured and unstructured data


Nice to have
  • Experience with large-scale processing frameworks (Hadoop, Spark, Pig, HBase)
  • Experience with pandas, NumPy, SciPy, and scikit-learn
  • You have built machine learning pipelines at a large scale



Our stack
Cadre runs a modern, service-oriented stack, continuously integrated and deployed (Jenkins, Ansible) on AWS: React + Redux, Immutable.js, Stylus, Node, Koa, Python, and Django, with a combination of SQL (Postgres) and NoSQL (Redis) data stores. We use Tableau, Mapbox, and D3.js for analytics and visualization.

Join us and be a key influence on our infrastructure decisions: we believe in always using the best tool for the job, and you’ll work with the rest of the team to keep refining our choices.


Interested? To hear more, send your resume over, and feel free to include any code samples (GitHub, Stack Overflow, etc.) or blog posts; we’d love to take a look! Our interview process begins with a phone interview, followed by an in-person interview with our engineers and company leaders.