Rahul Dey
Data Engineer at JP Morgan Chase
About
Rahul Dey is a highly skilled Data Engineer with over 5 years of experience designing and developing scalable, optimized batch and real-time data pipelines. He has a strong technical background and a passion for solving complex data engineering problems, with expertise in analyzing, designing, developing, testing, implementing, and maintaining data engineering solutions delivered against set deadlines. He is well-versed in Agile methodology and is able to work independently, take full ownership, and collaborate with engineering teams, business teams, and stakeholders.

His technical skillset includes programming languages such as Python, SQL, Scala, and Shell Scripting. He is proficient in Big Data frameworks such as Hadoop, Apache Spark (Streaming & Batch), PySpark, Hive, Impala, Hue, HDFS, and Cloudera, and has experience with the Elastic Stack, including Elasticsearch, Logstash, Kibana, and Beats. On the cloud side, he has expertise in Azure and AWS, including Azure Data Factory, Azure Data Lake Gen2, Azure Blob Storage, Azure Databricks, Azure Synapse, Azure Log Analytics, S3, AWS EMR Clusters, AWS Redshift, AWS Lambda, AWS CloudWatch, AWS Glue, AWS Data Migration Service, and AWS QuickSight. He is also proficient in automation/build tools such as Docker, Azure Bicep, and Azure ARM templates; uses PyCharm, IntelliJ, and Jupyter Notebook as his preferred IDEs; and is well-versed in version control and documentation tools such as Bitbucket, GitHub, JIRA, and Confluence.

Currently, Rahul works as a Data Engineer at JP Morgan Chase, where he is responsible for designing and developing data pipelines. Prior to this, he was a Senior Data Engineer at Bosch, where he developed real-time and batch data pipelines using Apache Spark, optimized pipelines to reduce overall latency, built automated solutions for tracking latency and data loss in critical pipelines, and wrote detailed documentation on system architecture, deployment processes, and automated solutions. Before that, he worked as a Software Engineer at Attra, where he developed batch data pipelines in Apache Spark based on designs from senior data engineers and collaborated with different teams and data providers to understand data granularity.

Rahul holds a Bachelor of Engineering degree in Electronics and Communications Engineering from Visvesvaraya Technological University. His tech stack includes Data Engineering, AWS, Azure, Spark, Hadoop, and SQL.
Education Overview
• Visvesvaraya Technological University
Companies Overview
• JP Morgan
• Bosch
• Attra (A Synechron Company)
• Palle Technologies
Experience Overview
7.1 Years
Skills
• Amazon Athena
• Amazon Relational Database Service (RDS)
• Amazon S3
• Amazon Web Services (AWS)
• Analytics
• Apache Hive
• Apache Spark
• Architecture
• AWS Lambda
• Azure Data Factory
• Azure Databricks
• Big Data
• Data Engineering
• Data Pipelines
• Databricks
• Design
• Elastic Stack (ELK)
• Hadoop
• HDFS
• Identity & Access Management (IAM)
• Infrastructure
• Logstash
• Machine Learning (ML)
• MapReduce
• Microsoft Azure
• Python
• Scala
• SQL
• Storage
• Testing
Contact Details
Email (Verified)
xxxxxxxx@xxxx.xx
Mobile Number
+91XXXXXXXXXX
Education
Visvesvaraya Technological University
Bachelor of Engineering - BE
2012 - 2016