Canopy is a Fintech for Fintechs. We provide an immutable, bitemporal system of record to Fintechs that offer diverse financial products, ranging from credit and debit cards to installment loans.
What you’ll do
As a Postgres Engineer, you will work within a highly collaborative, agile environment to build out core business functions. The engineering team works closely with the product team to understand business needs, turn them into clear, structured specifications, and implement those specs as performant code that can support clients in a highly regulated industry, evolve in lockstep with the business, and serve as a robust foundation for further feature development.
Because we must guarantee immutability while providing bitemporal facilities for financial operations over sensitive data, all within a highly regulated environment, experience with the financial sector and with data immutability is a strong bonus.
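To make the bitemporal pattern concrete, here is a minimal, hypothetical sketch (table and column names are illustrative, not Canopy's actual schema). An append-only table tracks both valid time (when a fact is true in the world) and transaction time (when the system recorded it), so the ledger can be queried "as of" any past moment without ever updating a row:

```sql
-- Illustrative bitemporal ledger table (hypothetical names).
-- Rows are immutable: corrections are made by inserting new rows
-- that supersede old ones, never by UPDATE or DELETE.
CREATE TABLE ledger_entries (
    entry_id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    account_id    bigint      NOT NULL,
    amount_cents  bigint      NOT NULL,
    -- Valid time: when the fact is true in the real world.
    effective_at  timestamptz NOT NULL,
    -- Transaction time: when the system recorded the fact.
    recorded_at   timestamptz NOT NULL DEFAULT now(),
    -- A later row may supersede an earlier one instead of mutating it.
    supersedes    bigint REFERENCES ledger_entries (entry_id)
);

-- "As of" query: for each effective fact, take the latest record
-- that was known at a given transaction-time cutoff.
SELECT DISTINCT ON (account_id, effective_at)
       account_id, effective_at, amount_cents
FROM   ledger_entries
WHERE  recorded_at <= '2024-01-31T23:59:59Z'
ORDER  BY account_id, effective_at, recorded_at DESC;
```

Because corrections arrive as new rows, the `DISTINCT ON` query can reconstruct the state of knowledge at any `recorded_at` cutoff, which is exactly what auditors in a regulated environment need.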
Own problems end-to-end, thinking through architecture, data modeling, performance, scalability, and compliance.
Write high-quality SQL and PL/pgSQL code that meets product requirements and conforms to established norms and standards.
Take initiative not only in documenting your own code, but in updating and maintaining documentation throughout the codebase, optimizing for readability and understanding for all current and future contributors.
Propagate knowledge throughout the organization and communicate effectively with stakeholders across the leadership, product, engineering, and QA teams.
Clichéd or not, being a team player is absolutely critical. We are a fully remote company, so overcommunication is standard practice. The business domain is highly complex, so staying closely aligned with the engineering, product, and other teams is necessary to make effective progress.
8 or more years of experience with Postgres, MySQL, Oracle, or another industry-standard relational database.
Understanding of the query planner, query optimizer, stored procedures, and various indexing strategies is required.
Understanding of Common Table Expressions (CTEs), recursive queries, window functions, timestamp types and when to use each, index types and when to use each, the cost of joins, and the trade-offs of data normalization is required.
Experience with PL/pgSQL, Postgres extensions, and other procedural languages supported by Postgres is a strong bonus.
Understanding of or experience with Postgres internals, configuration options, and related tuning is a bonus.
Experience with Aurora, performance tuning, and logging and tracing within the AWS ecosystem is a bonus.
Comfort and proven experience working in focused, nimble teams that execute on requirements, move fast, do NOT break things, and build in a maintainable, debuggable manner.
Strong conceptual understanding of, and proven experience with, data architecture and the modeling of complex, open-world domains into structured, formalized, logically consistent, and performant schemas within the relational paradigm.
Facility with at least one other high-level programming language, e.g. Go, Node, Java, Clojure, or C.
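Several of the skills listed above (recursive CTEs, window functions, cost-aware query writing) come together in everyday installment-lending tasks such as generating payment schedules. A small, hypothetical sketch, not taken from Canopy's codebase:

```sql
-- Generate a 6-payment installment schedule with a recursive CTE,
-- then use a window function to compute the running principal repaid.
-- Amounts are in cents; a $1,200.00 loan repaid in $200.00 installments.
WITH RECURSIVE schedule(period, balance_cents) AS (
    SELECT 1, 100000                       -- balance after payment 1
    UNION ALL
    SELECT period + 1, balance_cents - 20000
    FROM   schedule
    WHERE  period < 6                      -- stop after 6 payments
)
SELECT period,
       balance_cents,
       sum(20000) OVER (ORDER BY period) AS repaid_cents
FROM   schedule
ORDER  BY period;
```

The recursive member derives each period from the previous one, while the window function accumulates across rows without a self-join, the kind of choice where knowing the cost of joins pays off.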