Data Engineer
If you are passionate about modern data platforms and want to be part of shaping the future of Tryg, then we would like to hear from you!
Join us in creating value through data insights and by exploring modern data platforms to deliver value.
Why Tryg?
This is exactly the right moment to join us, as we focus strongly on digitalization and innovation of our services and business processes. Data is an incredibly important asset for Tryg, and you will make your mark on the platforms we develop.
Join our dedicated team of top professionals and make Tryg's strategic ambitions possible through the implementation of an advanced and comprehensive data architecture. You will be given great personal responsibility and extensive opportunities to learn and grow your knowledge.
We also have multiple opportunities within Tryg to shape your data engineering career.
About the Job | Data Engineer
Tryg has a long tradition of using data to understand risks. We are now taking this experience to the next level and expanding our activities to many different customer-centric initiatives. Customer touchpoint data is scattered across a multitude of different systems; therefore, we are building an integrated data platform for easy, purpose-driven, and compliant access to data.
In your role as our new Data Engineer, you will get full responsibility for implementing claim domain data sources in the data layer. The work comprises disciplines such as:
- Data analysis
- Data modeling
- Implementation of streaming ETL
- Data quality tests
Our tech stack is ever-growing as we evolve and stay up to date with technology. As a Data Engineer, you will mainly be exposed to:
- Java (Kafka Streams) – Java, in combination with our custom framework, is the main language used in all our data pipelines.
- SQL – Used purely for analysis, as all pipeline work is pure streaming.
- Kafka – Kafka is used as the streaming platform. Our pipelines work purely on Kafka, for both input and output.
- Git – Everything must go into a Git repository, and together with our DevOps team we are moving toward a GitOps setup.
- GraphQL – Used as an integration layer for reading data from other systems.
Through collaboration with our DevOps team, you will likely also be exposed to Docker, Kubernetes (K8s), Operators, and CI pipelines.
Collaboration is key, as your daily activities will occur in close cooperation with stakeholders.
About you
You most likely have a background in a data-related field and are used to analyzing and understanding large datasets. Solid experience with data analysis or Java development is preferred. Up until now you might only have worked with batch processing but want to expand your skillset with streaming – a journey we look forward to accompanying you on!
Prior exposure to the insurance domain is an advantage but not a prerequisite. As our data engineering tech stack is heavily focused on Kafka and streaming, it is an advantage if you can demonstrate experience with some of these technologies.
Great curiosity and an eternal willingness to invest in your own development are important to us. We are constantly looking for smarter ways of working and your contribution is important.
Tryg has teams in several countries, so a natural curiosity about and understanding of cultural differences are expected in order to succeed in the role.
Curious?
If you recognize yourself in the description, please send your application as soon as possible. Application deadline: July 8th, 2022. Interviews are held on an ongoing basis.
If you have questions or want to know more, you are welcome to contact Shaji Balakrishnan at +45 2058 5983, shaji.balakrishnan@tryg.dk or Andreas Harmuth at +45 2912 9070, andreas.harmuth@tryg.dk.
We look forward to receiving your application.