Dun & Bradstreet is seeking a Software Engineer II to design, build, and deploy new data pipelines within its Big Data ecosystem using tools such as StreamSets, Talend, or Informatica BDM. The role involves designing ETL/ELT data pipelines and working with data lakes and modern data-warehousing practices. Key responsibilities include expert-level programming in Python and Spark, working with cloud-based infrastructure on GCP, and using ETL tools for complex loads and cluster batch execution. The engineer will work with databases such as DB2, Oracle, and SQL Server, handle web-service integrations, and gain exposure to Apache Airflow for job scheduling. A strong understanding of Big Data architecture (HDFS), cluster management, and performance tuning is essential. The role also includes creating proofs of concept (POCs), implementing capabilities in production, managing workloads, enabling optimization, and participating in planning and skill-development activities.
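
To give a flavor of the Python/Spark ETL work the posting describes, here is a minimal illustrative sketch, not Dun & Bradstreet's actual pipeline: it reads raw records from a data-lake path, applies a simple transformation, and writes curated output back out. The bucket paths, table, and column names are all hypothetical.

```python
# Minimal PySpark ETL sketch: extract raw CSV, clean it, load Parquet.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: raw CSV landed in a (hypothetical) GCS bucket.
raw = spark.read.option("header", True).csv("gs://example-bucket/landing/orders/")

# Transform: normalize types and drop records missing the business key.
clean = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .where(F.col("order_id").isNotNull())
)

# Load: write partitioned Parquet into the curated zone of the lake.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://example-bucket/curated/orders/"
)

spark.stop()
```

In practice a job like this would be submitted to a Dataproc or similar Spark cluster on GCP, with the extract/transform/load stages tuned for the complex loads and cluster batch execution the posting mentions.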
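The posting also mentions Apache Airflow for job scheduling; the sketch below shows the general shape of an Airflow DAG, assuming Airflow 2.4+ syntax. The DAG id, schedule, and task callables are hypothetical stand-ins, not an actual Dun & Bradstreet workflow.

```python
# Minimal Airflow DAG sketch: chain a hypothetical extract step before load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling source records")  # placeholder for real extract logic


def load():
    print("writing to the warehouse")  # placeholder for real load logic


with DAG(
    dag_id="example_daily_etl",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # run once per day
    catchup=False,                # skip backfilling past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run extract, then load
```

The `>>` operator sets task ordering, which is how a scheduler like Airflow expresses the pipeline dependencies this role would manage.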