ETL and Big Data/Hadoop/Spark Developer

Job Information

Job Location: McLean, Virginia

Develop, modify, or update applications used by business units or infrastructure units. Lead, or play a lead technical role in, development teams' efforts to determine unit needs and the business processes the application automates. Participate in or review all steps of the software development life cycle to create and modify the software.


Key Job Functions 

  • Participate with a team of technical staff and business managers or practitioners in the business unit to determine system requirements and the functionality needed in a large or complex development project.
  • Assess and develop high-level design requirements for the project and communicate them in writing or in meetings with the development team. Assess detailed specifications against design requirements.
  • Review coding done to advance application upgrade, extension, or other development. Analyze application for data integrity issues.
  • Develop test protocols or plans for testing the revised application and review test results.
  • Serve as project lead or lead technical staff in the course of an application development project.
  • May mentor less experienced technical staff; may use high-end development tools to assist or facilitate the development process.


Minimum Experience

  • 4+ years of related experience
  • Bachelor’s Degree or equivalent required


Specialized Knowledge and Skills

  • 4+ years of hands-on experience and deep knowledge of Java application development
  • 4+ years of hands-on experience with Linux, Java/J2EE, SOA, and Oracle platforms
  • Strong Informatica ETL development experience
  • Hands-on ELT experience
  • Excellent PL/SQL Oracle development experience
  • Minimum 1 year of experience developing REST Web Services
  • Experience processing large amounts of structured and unstructured data. MapReduce experience is a plus
  • Experience with Spark APIs, SparkSQL and DataFrames
  • 1-2 years of building and coding applications using Hadoop components – HDFS, Spark, HBase, Hive, Sqoop, Kafka, Storm, YARN, HiveQL, SparkSQL. Scala experience is a plus
  • Experience with Apache NiFi highly desired
  • Hands-on experience with NoSQL databases
  • Hands-on expertise with CI/CD and DevOps tooling: Jenkins, Maven or Ant, and the Atlassian stack

