Big Data/Hadoop Engineer

Rackspace
On-site
India - Bangalore

We are hiring a Big Data / Hadoop Engineer.


Experience - 3+ years

Location - Bangalore (3 days WFO)

Notice Period - Less than 30 days only


•    Big Data Development: Design, build, and maintain robust and scalable data pipelines for ETL/ELT processes using tools within the Hadoop ecosystem.

•    Programming & Scripting: Develop complex data processing applications using Spark (with Python/Scala/Java) and implement efficient jobs using Hive and Pig.

•    Hadoop Ecosystem Mastery: Work extensively with core Hadoop components including HDFS, YARN, MapReduce, Hive, HBase, Pig, Sqoop, and Flume.

•    Performance Tuning: Optimize large-scale data processing jobs for speed and efficiency, troubleshooting performance bottlenecks across the cluster.

•    Data Modeling: Collaborate with data scientists and analysts to design and implement optimized data models for the data lake/warehouse (e.g., Hive schemas, HBase column families).

•    Workflow Management: Implement and manage job orchestration and scheduling using tools like Oozie or similar workflow managers.

•    Operational Support: Monitor, manage, and provide L2/L3 support for production Big Data platforms, ensuring high availability and reliability.
