A nationally operating federal department is looking for a Big Data Engineer to join one of their project teams.
The focus of this role will be to facilitate the development and deployment of new capabilities within the organisation.
- Provide best practice advice and technical leadership on implementation of data streams within a Hadoop platform.
- Development of data streams using Apache NiFi, Spark and Kafka (or equivalent technologies).
- Vendor engagement/product evaluation and ongoing maintenance of infrastructure deployed at regional sites.
- Provide technical direction and advice to vendors performing professional services.
Required Skills and Experience:
- Proven experience with the development of data streaming and processing pipelines using Apache NiFi, Apache Kafka, Apache HBase, HDFS, and Apache Spark.
- Data development and management experience including but not limited to data migration, modelling and analytics.
- Proven experience implementing role-based access control within an analytics platform.
- Proven DevOps experience in a large infrastructure environment, and big data platform experience including but not limited to administration and support of a Hortonworks/Cloudera/Hadoop platform.
- Proven system implementation and integration skills, including but not limited to systems configuration, change management processes, and documentation.
Due to the nature of the work, Australian Citizenship is mandatory.
- Exciting initial 12-month contract
- Opportunity to become a valued member of a newly established program of work
- Office based in the centre of the Brisbane CBD
If you wish to apply for this position, please submit your resume by clicking the 'Apply Now' button. For further information please contact Dylan Sheoshker at Clicks IT Recruitment on 07 3027 2560.