Sharad Gupta
Senior Data Engineer experienced in building scalable end-to-end data pipelines using various big data platforms.
About
Sharad Gupta is a Data Engineer 3 at PayPal with over 5 years of experience building end-to-end Big Data pipelines (batch and streaming) on various big data platforms. A Big Data professional well-versed in agile methodology, he has a deep understanding of the SDLC and has worked with international clients in teams ranging from small to large. He currently works as a full-stack big data developer.

At Clairvoyant, Sharad worked as a Big Data Developer, where he was responsible for designing and implementing Spark Streaming applications and frameworks for new subsidiaries. He developed a common product framework with Kafka and Spark that is used as a class-2 data processor across all data pipelines, and designed and coded a data ingestion framework that is widely used across all domains. He maintained data quality, onboarded new subsidiaries and clients onto the streaming pipeline, and maintained Kafka-based streaming pipelines. His work also covered data quality utilities and automation written in Spark and Scala; code reviews, code design patterns, code maintenance, and deployment; and setting up Docker and Spark application deployment in the staging environment from scratch.

At Datametica, Sharad worked as a Senior Big Data Engineer in the insurance domain. He migrated Spark 1.6 code to 2.2.0 with refactoring, moved the data pipeline flow with new business rules from Hive to Spark SQL with automation, and added new modules to the existing Spark code. He also automated data quality utilities with Spark and Scala, automated BigQuery table creation, and deployed Spark applications on the Dataproc cluster.

In addition, Sharad worked on code migration from on-premise to GCP, designed and developed data layers from scratch, wrote Hive queries for complex business requirements, and implemented a hot/cold data concept to minimize reporting downtime for end users. He developed Spark code in Scala for incremental and historical data loads, and wrote Spark jobs to read from and write to RDBMSs such as Oracle and NoSQL stores such as HBase. He built shell wrappers to automate data loading across the project's layers, and developed file-based triggering, event-based triggering, and various types of Oozie workflows. He also wrote code to maintain stats and logs for every run of every job, whether historical or incremental, and automated testing processes for different layers to speed up delivery. He has used GCP Dataproc, Filestore, BigQuery, and Cloud
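The profile describes Kafka-plus-Spark streaming frameworks and Scala data-quality utilities without implementation detail. As a rough illustration only, the sketch below shows a minimal Spark Structured Streaming job of that general shape; the broker address, topic, output paths, and the simple quality filter are hypothetical placeholders, not details of the actual frameworks mentioned above.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, length}

// Minimal sketch of a Kafka -> Spark Structured Streaming ingestion job.
// Broker, topic, and paths are illustrative placeholders.
object StreamingIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-ingest-sketch")
      .getOrCreate()

    // Read raw events from a Kafka topic.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers the payload as binary; cast it to a string for downstream parsing.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Simple data-quality gate: drop null or empty payloads before they reach the sink.
    val clean = events.filter(col("payload").isNotNull && length(col("payload")) > 0)

    // Write the cleaned stream as Parquet, with checkpointing for fault tolerance.
    val query = clean.writeStream
      .format("parquet")
      .option("path", "/data/landing/events")
      .option("checkpointLocation", "/data/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```

A production framework of the kind described would additionally handle schema validation, per-subsidiary onboarding configuration, and deployment, which are omitted from this sketch.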
Education Overview
• Delhi University
• Daffodil Glorious Higher Secondary School
Companies Overview
• Zscaler
• PayPal
• Clairvoyant
• Datametica
• Wipro
Experience Overview
9.7 Years

Contact Details
Email (Verified)
shaXXXXXXXXXXXXXXXXXXXom
Mobile Number
+91XXXXXXXX17