Software Engineer


Job Description

Skills and Competencies

  • 5+ years of professional software engineering experience with a proven track record of building complex, scalable systems.
  • Expert-level proficiency in designing and implementing data processing solutions using Apache Spark, with strong skills in both Python (PySpark) and Scala.
  • Demonstrated experience in building, deploying, and managing data streaming pipelines using Apache Kafka and its ecosystem (e.g., Kafka Connect, Kafka Streams).
  • Solid understanding of and practical experience with big data technologies and concepts (e.g., the Hadoop ecosystem—HDFS, Hive, distributed computing, partitioning, file formats like Parquet/Avro).
  • Proven ability to work effectively in an Agile/Scrum development environment, participating in sprints and related ceremonies.
  • Demonstrated ability to work independently, manage priorities, and deliver end-to-end solutions with a strong focus on automated testing and quality assurance.
  • Excellent problem-solving, debugging, and analytical skills.
  • Strong communication and interpersonal skills.


Preferred Qualifications & Skills

  • Experience with cloud-based data platforms and services (e.g., AWS EMR, S3, Kinesis, MSK, Glue; Azure Databricks, ADLS).
  • Experience with workflow orchestration tools (e.g., Airflow, Dagster, Prefect).
  • Experience with containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes).


Education

  • Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent practical experience).


Responsibilities

  • Design, develop, implement, and maintain robust, scalable, and efficient batch and real-time data pipelines using Apache Spark (Python/PySpark and Scala) and Apache Kafka.
  • Work extensively with large, complex datasets residing in various storage systems (e.g., data lakes, data warehouses, distributed file systems).
  • Build and manage real-time data streaming solutions to ingest, process, and serve data with low latency using Apache Kafka.
  • Optimize data processing jobs and data storage solutions for performance, scalability, and cost-effectiveness within big data ecosystems.
  • Implement comprehensive automated testing (unit, integration, end-to-end) to ensure data quality, pipeline reliability, and code robustness.
  • Collaborate closely with data scientists, analysts, software engineers, and product managers to understand data requirements and deliver effective solutions.
  • Actively participate in Agile/Scrum ceremonies, including sprint planning, daily stand-ups, sprint reviews, and retrospectives.
  • Take ownership of assigned tasks and projects, driving them to completion independently while adhering to deadlines and quality standards.
  • Troubleshoot and resolve complex issues related to data pipelines, platforms, and performance.
  • Contribute to the evolution of data architecture, standards, and best practices.
  • Mentor junior engineers and share knowledge within the team.
  • Document technical designs, processes, and implementation details.


About the Team

Our Engineering team is responsible for developing the software systems and digital experiences that power Moody’s products and services.

By joining our team, you will be part of exciting work, including:

  • Building scalable and reliable full-stack solutions that enhance the user experience.
  • Shaping the architectural direction of key platforms and contributing to technical strategy.
  • Collaborating with global teams to deliver innovative technology in a high-impact environment.
  • Posted: 06/26/2025
  • Job Reference #: 9617
  • Location(s):
    • 186-22-49, 8th B Cross Rd, Bengaluru, Karnataka
  • Line of Business: Data Estate (DE)
  • Job category:
    • Engineering & Technology
  • Experience Level: Experienced Hire