American IT Resource Group, Inc.

Job Description:

Delivery Model:
- Good knowledge of Agile/Scrum and SDLC methodologies.
- Software engineering practices: TestNG, Mockito, JUnit, JIRA, Splunk, Log4j, TDD.
- DevOps: GitHub, uDeploy, Jenkins for Continuous Integration / Continuous Deployment (CI/CD), Stash, Autosys jobs, Maven, and Gradle.

Technology Aspects:
- Big Data & Hadoop: Spark 2.2.4, Scala 2.12, Hive, Drill, Red Hat Ceph, OpenIO, Scality RING, Kafka, Solace, Hue, ZooKeeper.
- Java: Core Java, J2EE, multithreading, Java 1.8, Spring, Spring MVC, Spring Boot, Hibernate, Spring (JPA, REST) microservices, Swagger API.
- Cloud Computing: AWS, S3-compatible APIs, knowledge of Kubernetes and containerization.
- Databases: SQL Server, Oracle 10g & 11g, DB2, Teradata.
- NoSQL databases: HBase, MapR-DB, MongoDB.


Responsibilities:
• Fully adopt modern software engineering and delivery practices: design principles and patterns, Agile, Big Data development with Spark using Scala on Hadoop, stateless Java REST/Spring Boot microservices, containerization, etc.
• Manage relationships with key technology and business partners, product owners, and other stakeholders.
• Design and develop security policies and custom integrations with IAM systems (such as Okta AD, PingID).
• Build integrations with single-page apps, mobile apps, and third-party systems using OAuth2 and SSO.
• Build APIs, common reusable components, and an automation framework for the client's intranet access to physical services running in sandbox or test environments.
• Bring deep knowledge and experience in designing and implementing solutions in the cloud (AWS).
• Ensure that the CI/CD pipeline covers lifecycle-management needs for enterprise use cases (such as APIs deployed in the legacy API Management environment).
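The stateless microservice design called for above can be illustrated with a small, dependency-free sketch. This is not the client's code and uses no Spring Boot: the AccountHandler name, the balances map, and the JSON shape are all hypothetical. The point is that a stateless handler is a pure function of its inputs, keeping no session or instance state between requests, which is what lets such services scale horizontally behind a load balancer.

```java
import java.util.Map;

/**
 * Sketch of a stateless REST-style handler: every piece of data the
 * handler needs arrives with the request; nothing is remembered
 * between calls. All names here are hypothetical, for illustration.
 */
public class AccountHandler {

    /** Pure function of its inputs: the same request always yields the same JSON. */
    public static String handleGet(String accountId, Map<String, Long> balances) {
        Long cents = balances.get(accountId);
        if (cents == null) {
            // No server-side session to consult; the miss is reported directly.
            return "{\"error\":\"not found\",\"status\":404}";
        }
        return "{\"accountId\":\"" + accountId + "\",\"balanceCents\":" + cents + "}";
    }

    public static void main(String[] args) {
        Map<String, Long> balances = Map.of("a1", 1250L);
        System.out.println(handleGet("a1", balances));
        System.out.println(handleGet("zz", balances));
    }
}
```

In a real Spring Boot service the same shape appears as a `@RestController` method whose inputs come entirely from the request and injected repositories, never from mutable controller fields.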


Requirements:
• 3+ years of experience building enterprise software applications on a Java/J2EE-based technology stack, plus Big Data ingestion and distribution frameworks using Hadoop, Spark, and Scala.
• 2+ years of experience working with Spark (Scala and Java) on the MapR distribution with MapR-FS and Hortonworks, and with Big Data tools including Hive, Drill, Hue, Kafka, and Solace, plus NoSQL databases such as HBase, MongoDB, and MapR-DB (binary & JSON).
• Very strong knowledge of object-oriented programming languages such as Java, and of Spring MVC and Spring Boot microservices applications.
• Strong experience with relational database concepts, SQL, and procedural languages; object-oriented design; enterprise, distributed, and web-based computing methods; and design patterns.
• Must understand the concepts of SOAP and REST services as well as both XML and JSON message formats.
• Deep knowledge and experience in designing and implementing solutions in the cloud (AWS).
• Proficient in Continuous Integration (CI) and Continuous Deployment (CD) pipelines using Jenkins.
• Strong analytical and problem-solving skills, including the ability to decompose complex problems and perform root-cause analysis.
• Ability to work in a collaborative environment.
• Experience with various testing methodologies and strategies: Test-Driven Development (TDD) implemented with JUnit, mock objects, stubs, test suites, and test harnesses.
• Experience with agile team tools (GitHub, JIRA, Scrum).
• Experience with the Eclipse and IntelliJ IDEs and the Maven and Gradle build tools.
• Ability to self-organize, prioritize, and handle multiple priorities without compromising quality.
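The TDD expectation above can be sketched without any framework. This is not the client's test suite: plain assertions stand in for the JUnit test methods the posting names, and VersionUtil and its behavior are hypothetical. In TDD the failing assertions are written first, then the minimal implementation that makes them pass.

```java
/**
 * TDD-style sketch: the assertions in main() play the role of the
 * JUnit tests that would be written first; major() is the minimal
 * implementation that makes them pass. VersionUtil is hypothetical.
 */
public class VersionUtil {

    /** Returns the major component of a dotted version string, e.g. "2.12" -> 2. */
    public static int major(String version) {
        int dot = version.indexOf('.');
        String head = (dot < 0) ? version : version.substring(0, dot);
        return Integer.parseInt(head);
    }

    public static void main(String[] args) {
        // Written before the implementation; run red, then make them green.
        if (major("2.12") != 2) throw new AssertionError("major(\"2.12\")");
        if (major("10.0.1") != 10) throw new AssertionError("major(\"10.0.1\")");
        if (major("11") != 11) throw new AssertionError("major(\"11\")");
        System.out.println("all tests pass");
    }
}
```

With JUnit the same checks would live in `@Test` methods, and mock objects or stubs would replace any collaborator that touches a network or database.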