As we continue to grow, we are looking for a:
Data Engineer
City: Belgrade
Key responsibilities:
· Data Modeling: Structuring data and designing schemas that capture data and how it relates to other data.
· Data Pipeline Development: Designing, building, and maintaining efficient and reliable data pipelines and datasets.
· Data Integration: Extracting, transforming, and delivering data to other systems and tools for business intelligence; integrating data from different sources while ensuring consistency and accuracy.
· Data Streaming: Building and maintaining robust streaming-data architectures for integration with machine learning models as well as for reporting.
· Optimization: Identifying and replacing data models and queries that cause performance problems; troubleshooting system and data issues that cause downtime or unexpected behavior.
· Collaboration: Working with stakeholders to develop and deliver the data they need to work successfully.
· Data Governance: Proactively identifying sensitive data and taking steps to address its storage and use.
· Data Quality: Building internal tooling and reporting to identify poor-quality data and data gaps, allowing users to work with data without problems.
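As an illustration of the data-quality work described above (not part of the role description itself), a basic completeness check might look like this minimal Python sketch; all field names and records here are hypothetical:

```python
# Minimal sketch of a data-quality check: flag records with missing
# fields so downstream users can trust the dataset.
# All field names and sample records are hypothetical.

def find_quality_issues(records, required_fields):
    """Return a list of (index, problem) pairs for bad records."""
    issues = []
    for i, record in enumerate(records):
        for field in required_fields:
            if record.get(field) in (None, ""):
                issues.append((i, f"missing {field}"))
    return issues

orders = [
    {"id": 1, "amount": 9.99, "currency": "EUR"},
    {"id": 2, "amount": None, "currency": "EUR"},  # data gap
    {"id": 3, "amount": 4.50, "currency": ""},     # poor-quality value
]

print(find_quality_issues(orders, ["id", "amount", "currency"]))
# [(1, 'missing amount'), (2, 'missing currency')]
```

In practice such checks would feed dashboards or alerts rather than a print statement, but the pattern is the same: validate every record against explicit rules and surface what fails.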
Our requirements:
· AWS Cloud Services: Hands-on use of key AWS services such as Amazon S3, AWS Glue, Amazon DynamoDB, Amazon Redshift, Amazon Kinesis, AWS DMS, Amazon SQS, etc.
· Knowledge of AWS Lambda for serverless architectures.
· SQL and Relational Databases: Advanced knowledge of SQL for data management; experience in database modeling.
· Data Warehousing: Understanding of and experience working with data warehouses such as Amazon Redshift.
· ETL (Extract, Transform, Load): Using tools such as AWS Glue for ETL processes.
· Programming Languages: Knowledge of Python; experience with Java or Scala for script and application development is an advantage.
· Tools: AWS Glue as an ETL tool that enables automatic discovery, transformation, and loading of data; Apache Spark as an engine for large-scale data processing; Apache Flink as a distributed stream-processing engine, etc.
· Amazon RDS (Relational Database Service): a service for managing and scaling relational databases, including SQL Server, PostgreSQL, and Oracle.
· AWS DMS (Database Migration Service): a tool for migrating data between different types of databases.
· Amazon API Gateway: Working with API Gateway to manage, monitor, and secure access to APIs, enabling integrated communication between different parts of the system.
· Amazon Kinesis: a group of AWS services for real-time processing and analysis of big data.
· Advantage: Knowledge of AWS IAM (Identity and Access Management).
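To illustrate the ETL pattern the requirements above refer to (purely as a sketch, with no AWS dependencies), here is a minimal Python example; in a real pipeline AWS Glue or Apache Spark would replace these in-memory steps, and all names below are hypothetical:

```python
# Minimal sketch of the ETL (Extract, Transform, Load) pattern:
# extract rows from a source, transform them, load them into a target.
# All function and field names here are hypothetical examples.

def extract(source_rows):
    """Extract: read raw rows from the source system."""
    return list(source_rows)

def transform(rows):
    """Transform: normalize fields and drop invalid rows."""
    cleaned = []
    for row in rows:
        if row.get("user_id") is None:
            continue  # skip rows that fail validation
        cleaned.append({
            "user_id": row["user_id"],
            "email": row.get("email", "").strip().lower(),
        })
    return cleaned

def load(rows, target):
    """Load: write transformed rows into the target store."""
    target.extend(rows)
    return len(rows)

warehouse = []
raw = [
    {"user_id": 1, "email": " Ana@Example.COM "},
    {"user_id": None, "email": "broken@example.com"},
]
loaded = load(transform(extract(raw)), warehouse)
print(loaded, warehouse)
# 1 [{'user_id': 1, 'email': 'ana@example.com'}]
```

The separation into three small, testable functions mirrors how managed ETL tools structure jobs: each stage can be validated and monitored independently.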
Work Environment & Comfort:
Work-Life Balance:
Growth & Development:
Additional Perks: