About the company
Our client helps organizations navigate complex challenges through cutting-edge technologies such as artificial intelligence, cybersecurity, and automation.
About the role
You will be responsible for designing, building, and optimizing scalable data pipelines, ensuring efficient data processing and integration across various systems. The ideal candidate will have strong expertise in cloud technologies, ETL processes, and data architecture.
...
About the job
- Design and implement scalable ETL pipelines for data processing
- Define and implement data models for data warehouses and lakehouses
- Conduct ETL job code reviews to ensure best practices and efficiency
- Design and implement pipeline job orchestration for smooth data workflows
- Ensure adoption of CI/CD for data product delivery within teams
- Collaborate with stakeholders to address data infrastructure needs and resolve technical issues
Knowledge, skills, and experience
- In-depth knowledge of databases, data marts, data modeling, data governance, data security, and ETL solutions
- Hands-on experience with big data processing, including batch and streaming technologies such as Talend and Kafka
- Experience with cloud environments and microservices, including familiarity with platforms such as Google Cloud Platform and BigQuery
- Experience working in on-prem data environments with a strong focus on security constraints
- Familiarity with CI/CD and incremental delivery practices
How to apply
Interested candidates may contact Hua Hui at +6017 960 0313 for a confidential discussion.