Role: AWS Data Architect
Job Type: Full-time
Location: Baltimore, MD
Roles and Responsibilities:
Must-Have Skills:
Data warehousing, PySpark, GitHub, AWS data platform, Glue, EMR, Redshift, Databricks, Data Marts, DBT/Glue/EMR or Matillion, Big Data ecosystem (Hadoop), data architecture, data engineering, data modeling, data consumption
- 10-15 years of total experience, with at least 3 years of expertise in cloud data warehouse technologies on the AWS data platform, covering Glue, EMR, Redshift, Databricks, etc.
- At least one end-to-end AWS data platform implementation is a must, covering all aspects including architecture, design, data engineering, data visualization, and data governance (specifically data quality and lineage).
- Significant experience with data migrations and with the design and development of Operational Data Stores, Enterprise Data Warehouses, Data Lakes, and Data Marts.
- Strong hands-on knowledge of SQL and the data warehousing life cycle is an absolute requirement.
- Experience with cloud ETL and ELT using tools such as DBT, Glue, EMR, or Matillion (or any other ELT tool), and exposure to the Big Data ecosystem (Hadoop).
- Expertise in at least one traditional data warehouse solution on Oracle, Teradata, or Oracle Exadata.
- Excellent communication skills to liaise with Business & IT stakeholders.
- Expertise in project execution planning and effort estimation.
- Understanding of Data Vault, data mesh and data fabric architecture patterns.
- Exposure to Agile ways of working.
Thanks & Regards,
Sweety Singh
Tata Consultancy Services
Mail: ...@tcs.com