Zeta.tech
Zeta intends to replace many of the legacy systems banks use for processing payments. Banks need to leapfrog into an era of connected devices and omnipresent commerce. Banking should become an integral part of commerce and enterprise systems, enabling seamless consumer and business transactions. The average number of interactions an account holder has with a bank is expected to rise from the current 3-4 transactions per month to 12-15 per day in the near term, and could grow to an unimaginably large number in the future. Most banking systems aren't designed for this scale; in many ways, these systems limit the imagination of what is possible. We want to change this fundamentally, in an interoperable and regulatory-compliant manner.
What is the Job like?
- Building highly scalable and secure data infrastructure that is used by multiple teams and services.
- Acting as the primary owner of one or more components of the platform and driving innovation in your area of ownership.
- Building data flow and transformation systems for various data stores such as analytics & BI, logging, application metrics, and clickstream events.
- Building tools and applications that reduce manual effort and eliminate friction in accessing data and managing data infrastructure.
- Contributing to data modelling across various services in the data platform.
- Owning and operating various components in the data platform.
- Working in a cross-functional team and collaborating with peers from different functions.
- Participating actively in the recruitment and nurturing of engineers as awesome as you.
- Reviewing and influencing new and evolving designs, architecture, standards, and methods with stability, maintainability, and scale in mind.
- Identifying patterns and providing solutions to entire classes of problems.
- Researching, evaluating, and socializing new tools, technologies, and techniques that improve the value of the system.
- Multi-tasking, prioritizing, and handling dependencies with minimal oversight.
Who should apply?
- Bachelor’s/Master’s degree in engineering (computer science, information systems) with 4+ years of experience building data warehouses and BI systems.
- Experience with Apache Spark, Kafka, RDBMSs, and Hadoop/Presto/AWS Athena.
- Good understanding of nuances of distributed systems, scalability, and availability
- Strong database and storage fundamentals, including a good understanding of database internals (RDBMS).
- Strong Java skills, including experience working on large-scale applications
- In-depth understanding of concurrency, synchronization, NIO, memory allocation and GC
- Experience working on real-time streaming solutions using Flink, Spark, or Kafka Streams.
- Experience with IaaS clouds like AWS/Google Cloud, Azure, OpenStack, etc.
- Experience working with message brokers and application containers
- Great ability to mentor and train other team members
Good to have
- Experience working with AWS Redshift, Kafka Connect, Postgres
- Ability to write and maintain complex SQL code.