Responsibilities
* Implement scalable, reliable data engineering solutions for moving data efficiently across systems, in both near-real-time and batch modes
* Build a data store that will serve as the central source of truth
* Implement tools that help our data consumers extract, analyze, and visualize data faster
* Build data expertise and own data quality for the pipelines you build
* Evaluate new technologies and build prototypes to drive continuous improvement in Data Engineering
Requirements
* Experience working with database technologies (Oracle, MySQL, MongoDB, etc.) from an application programming perspective
* Big Data experience with Spark, Hadoop, and Spark SQL; Elasticsearch preferred
* DevOps experience, including Gradle
* Experience with a microservice, API-based implementation is preferred
* Excellent knowledge and understanding of Big Data technologies: Scala, Spark, Hadoop, and Elasticsearch
