
Sr. Data Engineer

Role: Sr. Data Engineer

Location: Bangalore

Experience: 5+ years

Key Skills: Data Lakehouse, Kafka, Oracle Cloud Infrastructure (OCI), Medallion Architecture, PySpark, Python, SQL, Airflow, AWS, REST APIs

What will your role look like: 

1. As a Senior Data Engineer, you will work on the design and implementation of enterprise-scale data platforms.

2. You will work on projects to build scalable data pipelines and platforms on Oracle Cloud Infrastructure (OCI), implementing Medallion Architecture for modern data lakehouse solutions.

3. You will interact with clients to understand and gather requirements.

4. You will be responsible for developing robust data ingestion, transformation, and curation pipelines that power enterprise analytics and AI/ML workloads.


Why you will love this role:

1. Develop and maintain data pipelines across bronze, silver, and gold layers using Medallion Architecture principles (a minimal PySpark sketch follows this list).

2. Build scalable and secure data lakehouse solutions on Oracle Cloud Infrastructure (OCI).

3. Implement ETL/ELT workflows using Oracle Data Integration tools (ODI, Data Transforms, GoldenGate).

4. Optimize data processing using Spark, Python, and SQL for batch and streaming data.

5. Collaborate with data architects, analysts, and business stakeholders to deliver high-quality datasets.

6. Ensure data quality, lineage, and governance across all layers of the data platform.

7. Monitor and troubleshoot data pipeline performance and reliability.
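
For illustration only, here is a minimal PySpark sketch of the bronze → silver → gold flow named in point 1 above. Every path, table name, and column is a hypothetical example (the OCI Object Storage URIs are simplified placeholders), not a reference to any real system:

# Minimal Medallion-style pipeline sketch; all paths and columns are
# hypothetical examples only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data as-is, stamping ingestion metadata.
bronze = (spark.read.json("oci://landing/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.mode("append").parquet("oci://lake/bronze/orders/")

# Silver: cleanse and standardize (dedupe, typing, null handling).
silver = (spark.read.parquet("oci://lake/bronze/orders/")
          .dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("order_id").isNotNull()))
silver.write.mode("overwrite").parquet("oci://lake/silver/orders/")

# Gold: curate business-level aggregates for analytics and AI/ML.
gold = (silver.groupBy("customer_id")
        .agg(F.count("order_id").alias("order_count"),
             F.sum("amount").alias("lifetime_value")))
gold.write.mode("overwrite").parquet("oci://lake/gold/customer_value/")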


We would like you to bring along: 

1. 5+ years of experience designing and implementing scalable, secure data lakehouse solutions with Medallion Architecture: Bronze (raw), Silver (cleansed), and Gold (curated) data layers.

2. Domain experience with Oracle AI Data Platform, Oracle Autonomous Database, Oracle Cloud Infrastructure (OCI), OCI Data Integration (OCI DI), OCI Data Flow, Oracle GoldenGate, and Oracle Data Transforms.

3. 5+ years of project experience with Spark (PySpark), SQL, Python, Airflow, Kafka, and REST APIs (a toy Airflow sketch follows this list).

4. 2 to 4 years of experience with dimensional modeling, data profiling, data validation, schema evolution, and log extraction.

5. Applied knowledge of data governance: RBAC, encryption, audit logging, and metadata management.

6. Extensive experience with PySpark development projects and designing data lakehouse systems that span heterogeneous source systems and real-time streaming.
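
As a toy illustration of the orchestration experience asked for in point 3, the sketch below chains the three Medallion layers in an Airflow 2.x DAG. The DAG id, task names, and callable are invented placeholders; a real task would submit a Spark job rather than print:

# Hypothetical Airflow 2.x DAG; all ids and names are invented.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_layer(layer: str) -> None:
    # Placeholder: a real task would submit a Spark job for the layer,
    # e.g. to OCI Data Flow, instead of printing.
    print(f"running {layer} layer")

with DAG(
    dag_id="medallion_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # the "schedule" argument requires Airflow >= 2.4
    catchup=False,
) as dag:
    bronze = PythonOperator(task_id="bronze", python_callable=run_layer, op_args=["bronze"])
    silver = PythonOperator(task_id="silver", python_callable=run_layer, op_args=["silver"])
    gold = PythonOperator(task_id="gold", python_callable=run_layer, op_args=["gold"])
    bronze >> silver >> gold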
