- Experience analyzing and building data pipelines, data architectures, and ETL/ELT processes, and processing structured and unstructured data, including post-go-live activities.
- Ability to analyze data, identify issues (e.g., gaps, inconsistencies), and troubleshoot them.
- Experience working with data stored in RDBMSs, plus at least a basic understanding of NoSQL databases.
- Knowledge of Scala and Spark, and a good understanding of the Hadoop ecosystem, including Hadoop file formats such as Parquet and ORC.
- Ability to write performant Scala code and SQL statements, and to design modular, future-proof solutions that are fit for purpose.
- Ability to work autonomously on Unix-based systems.
- Experience in working with customers to identify and clarify requirements.
- A strong interest in fintech and data-related technologies.
Responsibilities:
- Collaborate closely with our machine learning engineers and/or cloud architects on projects, always with a quality-focused, end-to-end approach.
- Support the reporting teams in the data exploration and data preparation phases.
- Implement data quality controls.
- Liaise with IT infrastructure teams to address infrastructure issues and to ensure that the platform's components and software remain consistent.
- Build and share knowledge with your colleagues, while benefiting from specialized training to keep you at the top of your field.
Nice to have:
- Experience with open-source technologies used in data analytics, such as Spark, Hive, HBase, and Kafka.
- Knowledge of Cloudera or IBM mainframe environments.