Data Engineer (P638)
About Us:
As a Mid-Level Data Engineer at Kenility, you’ll join a tight-knit team of creative developers, engineers, and designers who strive to deliver the highest-quality products to market.
Technical Requirements:
- Bachelor's degree in Computer Science or Information Technology, or a comparable qualification.
- Proficient with AWS Glue or EMR for building efficient data transformation and processing workflows.
- Skilled in building and deploying serverless functions with AWS Lambda to support various data tasks and integrations.
- Experienced with Step Functions or Apache Airflow for orchestrating, automating, and scheduling complex data pipelines.
- Adept at using Amazon S3 for secure, scalable data storage.
- Proficient with GitLab for version control and CI/CD pipelines, using GitLab CI, the AWS stack, or comparable tools.
- Skilled in Infrastructure as Code (IaC) tools, including Terraform, CloudFormation, or similar, to automate infrastructure deployment and management.
- Experienced in SQL for data querying and management, ensuring optimized performance and data accuracy.
- Proficient in PySpark for processing large datasets, with a focus on data transformation, cleansing, and analysis.
- Knowledgeable in open table formats such as Apache Hudi or Apache Iceberg for efficient data lake management and optimized storage.
- Experienced with Amazon API Gateway for managing and securing the APIs that support data integrations.
- Familiar with DynamoDB for NoSQL data storage, with an understanding of optimal data management practices.
- Capable of using CloudWatch, particularly Logs Insights, to monitor and troubleshoot data pipelines.
- Experienced with relational database services such as Amazon RDS for structured data management and support.
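To give a flavor of the day-to-day SQL work this role involves, below is a minimal sketch of a data-accuracy check. The table name (`orders`), its columns, and the use of in-memory SQLite are invented purely for illustration; in practice the same kind of query would run against RDS or a Glue-catalogued source before loading data downstream.

```python
import sqlite3

# Illustrative only: "orders" and its columns are hypothetical sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders (amount, status) VALUES (?, ?)",
    [(19.99, "shipped"), (5.00, "pending"), (None, "shipped")],
)

# A typical data-quality gate: flag rows with missing amounts before they
# reach a downstream pipeline stage.
nulls = conn.execute("SELECT COUNT(*) FROM orders WHERE amount IS NULL").fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(f"{nulls} of {total} rows have a NULL amount")  # → 1 of 3 rows have a NULL amount
```

Checks like this are usually wired into the orchestration layer (Step Functions or Airflow) so a pipeline run fails fast on bad data rather than propagating it.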
Soft Skills:
- Responsibility
- Proactivity
- Flexibility
- Great communication skills