Data Engineering
Pipelines, orchestration, reliability
DATA ENGINEER • CLOUD • PIPELINES
I design reliable ETL workflows, improve data quality, and ship analytics-ready datasets using SQL, Python, Spark, Airflow, and cloud services.
I’m a Data Engineer who enjoys building data systems that teams can trust. My work sits at the intersection of engineering and analytics — turning raw, inconsistent inputs into clean, query-ready datasets that power dashboards, reporting, and product decisions.
I care about the details that make pipelines production-ready: data contracts, validation rules, monitoring, and recoverability. I also love simplifying complexity: designing workflows that are easy to maintain, easy to debug, and built to scale.
Beyond tools and pipelines, I value clarity and ownership. I take responsibility for the full lifecycle of data, from ingestion to consumption, ensuring definitions are aligned, assumptions are documented, and downstream users can rely on the data. My goal is to make data products feel boringly reliable.
Define clear inputs/outputs, add validation checks, and keep datasets reproducible.
Monitor freshness, anomalies, failures, and SLA risks so issues are caught early.
Use simple patterns, reusable components, and docs so pipelines scale with the team.
Work backward from the question and deliver clean tables that drive decisions.
Write crisp updates, align definitions, and make trade-offs visible to stakeholders.
Take responsibility for delivery, quality, and long-term reliability — not just code.
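As a small illustration of the first two principles, validation checks and freshness monitoring can be sketched in plain Python. This is a minimal, hypothetical example (the function and field names are mine, not from any specific project): it checks a batch of records for required fields and flags rows that are older than an agreed freshness window.

```python
from datetime import datetime, timedelta, timezone

def validate_batch(rows, required_fields, max_age_hours=24):
    """Return a list of human-readable issues found in a batch of dict records.

    Illustrative only: checks required fields are present and non-empty,
    and flags rows whose 'updated_at' timestamp exceeds the freshness SLA.
    """
    issues = []
    if not rows:
        # An empty batch is often a silent upstream failure, so surface it.
        issues.append("batch is empty")
        return issues
    now = datetime.now(timezone.utc)
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing {field}")
        ts = row.get("updated_at")
        if ts is not None and now - ts > timedelta(hours=max_age_hours):
            issues.append(f"row {i}: stale (updated_at={ts.isoformat()})")
    return issues
```

In a real pipeline this kind of check would typically run as a task in the orchestrator (for example an Airflow task) that fails loudly or alerts when the issue list is non-empty, so problems are caught before they reach dashboards.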
Pipelines, orchestration, reliability
Scalable, cost-aware systems
From raw data → decisions
New Jersey Institute of Technology
Punjab Engineering College
Hands-on credentials across data + analytics.
Want to collaborate or have a role in mind? Send a message — I’ll reply soon.