IBM Big Data Engineer / Architect - Data Platform Services in Washington, District of Columbia

Job Description

At IBM Global Business Services (GBS), we partner with Fortune 1000 clients to deliver real business value by bringing together the world’s largest consulting practice with industry-leading research capability, enriching business consulting with advanced research, analytics, and technology, and teaming across all phases of an engagement to plan, build, and implement advanced business solutions. We establish new, flexible, and iterative approaches that only IBM can offer through our unique combination of skills, experience, and capabilities, leveraging the proven roadmaps and frameworks we have developed across our 17 industries. We also apply IBM’s global expertise and local capabilities through our unique global delivery network and our teams in over 170 countries, providing our clients with an integrated approach to business design and execution and turning strategies into actions.

The Data Platform Services practice area is seeking to hire a Big Data Engineer / Architect who will shape the design and implementation of Big Data projects as part of multi-disciplinary technical teams. This opening is a cross-sector role.

Responsibilities:

  • Lead the definition and socialization of end-to-end big data enablement solution architecture

  • Engage in direct discussions with senior client architecture executives

  • Be accountable for creating the end-to-end solution design and development approach in a Hadoop/Spark environment

  • Be accountable for integration solution design and development, integrating Hadoop/Spark environments with analytic platforms (e.g., SAS, SPSS) and with Enterprise Information Management (EIM) and Data Warehouse (DW) platforms

  • Design, test, and continuously improve the performance of Hadoop/Spark-based solutions

  • Utilize distributed/parallel processing for information management solution design and development

  • Guide development and provide coaching and leadership through all project phases

  • Provide advisory support in selecting products and components as part of sales solutioning

  • Create new methods for Big Data and lead teams that are developing accelerators

  • Apply technical expertise with relational databases (e.g., Oracle, SQL) to Big Data projects

You will be successful in this role if you enjoy problem solving and applying consulting skills. Team leadership experience is preferred.

BENEFITS

Health insurance. Paid time off. Corporate holidays. Sick leave. Family planning. Financial guidance. Competitive 401(k). Training and learning. We continue to expand our benefits and programs, offering some of the best support, guidance, and coverage for a diverse employee population.

  • ibm.com/employment/us/benefits/

  • ibm.com/press/us/en/pressrelease/50744.wss

CAREER GROWTH

Our goal is to be essential to the world, and that starts with our people. Company-wide, we kicked off an internal strategy program called Go Organic. At our core, we are committed to believing in and investing in our workforce through:

  • Skill development: helping our employees grow their foundational skills and promoting from within

  • Finding the dream job at IBM: navigating a company that offers the potential for many careers by channeling an employee's strengths and career aspirations

  • Diversity of people: Diversity of thought driving collective innovation

CORPORATE CITIZENSHIP

With an employee population of 375,000 in over 170 countries, we connect, collaborate, and care in amazing ways. IBMers drive a corporate culture of shared responsibility. We love grand challenges and everyday improvements, for our company and for the world. We care about each other, our clients, and the communities we live, work, and play in!

  • ibm.com/ibm/responsibility/initiatives.html

  • ibm.com/ibm/responsibility/corporateservicecorps/

(2252) CPTCBDSDP

Required Technical and Professional Expertise

  • At least 5 years of experience in the Hadoop platform

  • At least 5 years of experience in data architecture

  • At least 5 years of experience working in a distributed cluster environment

  • At least 5 years of experience with open-source tools such as Hadoop, Sqoop, Hive, HBase, Spark, Flume, Storm, Python, Kafka, Hortonworks, Apache, or Cassandra

  • At least 5 years of experience in designing large data warehouses with working knowledge of design techniques such as star schema and snowflake

  • At least 5 years of experience in various information modeling techniques

  • At least 5 years of experience in a consulting environment

Preferred Technical and Professional Experience

  • At least 2 years of experience with hands-on ETL script development and batch processing

  • At least 7 years of experience in data architecture

EO Statement

IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.