Tekton Labs (51-200 Employees, 40% 2 Yr Employee Growth Rate)
We are a US software development company delivering high-quality, cost-effective custom application development to clients worldwide. As a technology consulting company, we also help our clients with their digital transformation process.
Currently, we are seeking a Tech Data Lead:
What You Will Be Doing:
- Developing and implementing an overall organizational data strategy that is in line with business processes. The strategy includes data model designs, database development standards, implementation and management of data warehouses and data analytics systems.
- Identifying data sources, both internal and external, and working out a plan for data management that is aligned with organizational data strategy.
- Coordinating and collaborating with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data system.
- Managing end-to-end data architecture, from selecting the platform, designing the technical architecture, and developing the application to finally testing and implementing the proposed solution.
- Planning and executing big data solutions using technologies such as Hadoop, covering the complete life-cycle management of a Hadoop solution.
Your Profile Includes:
- Knowledge of the following data tools: Airflow, Postgres Aurora, Fivetran.
- Experience working with Python, AWS, and Apple Search Ads.
- Experience generating data files in the internal format using the Data Pipeline Infrastructure.
- Ability to implement common data management and reporting technologies, as well as the basics of columnar and NoSQL databases, data visualization, unstructured data, and predictive analytics.
- Understanding of predictive modeling, NLP and text analysis, Machine Learning (Desirable).
Deliverables:
- Ingestion: Implement the data pipeline from the Fivetran source (Postgres Aurora) to internal file generation.
- Load: Implement the data pipeline from the internal file generated during ingestion to loading into the client environment’s datastore.
- Data source tables and related infrastructure preparation.
- Features implemented per requirements and per Engineering Excellence guidelines.
- The implementation must follow the documented playbook for integrating media sources through Fivetran.
- All code must pass the CI/CD pipeline, including Python linting, Black formatting, and 100% test coverage for functional code, following established testing patterns.
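
The ingestion and load stages above can be sketched roughly as follows. This is a minimal illustration only: the column names, the CSV-style internal file format, and the dict-based datastore are all hypothetical stand-ins, not the actual internal format or client datastore.

```python
import csv
import io

def ingest(rows):
    """Ingestion stage: serialize rows pulled from the Fivetran-synced
    Postgres Aurora source into an internal file (CSV here, for illustration)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["campaign_id", "spend"])
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

def load(internal_file, datastore):
    """Load stage: parse the internal file generated during ingestion and
    write each record into the client environment's datastore (a plain
    dict in this sketch)."""
    reader = csv.DictReader(io.StringIO(internal_file))
    for record in reader:
        datastore[record["campaign_id"]] = float(record["spend"])
    return datastore

# Example run: one hypothetical Apple Search Ads row through both stages.
rows = [{"campaign_id": "c1", "spend": 10.5}]
store = load(ingest(rows), {})
```

In a real setup, each stage would typically be an Airflow task so the ingestion-to-load dependency, scheduling, and retries are handled by the orchestrator rather than in application code.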