Job Description
Charger Logistics is a world-class asset-based carrier. We specialize in delivering your assets on time and on budget. With a diverse fleet of equipment, we can handle a range of freight, including dedicated loads, specialized hauls, temperature-controlled goods, and HAZMAT cargo.
Charger Logistics invests time and support in its employees, giving them room to learn, grow their expertise, and work their way up. We are an entrepreneurial-minded organization that welcomes and supports individual ideas and strategies. Charger Logistics is seeking a well-rounded individual able to work in a fast-paced environment to join our team at the company’s office in Brampton, Ontario.
The Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that the data delivery architecture remains consistent across ongoing projects.
Job Duties:
- Create and maintain optimal data pipeline architecture.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Design, implement, and maintain large-scale batch and real-time scalable data pipelines with complex data transformations.
- Perform data wrangling to transform and map data from raw data forms into formats more appropriate and valuable for analytics.
- Write and optimize complex queries on large data sets.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Work with business teams to translate functional requirements into technical requirements.
- Gather business and functional requirements and provide project estimates.
- Develop workflows and tools that automate data loading processes and help ensure data quality and integrity.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Requirements:
- A bachelor’s degree in Computer Science, Software Engineering, or an equivalent field.
- 4+ years of experience as a Data Engineer or in a similar role.
- 4+ years of industry experience in software development, data engineering, or a related field, with a track record of manipulating, processing, and extracting information from large datasets.
- Advanced working knowledge of SQL, including writing and debugging complex queries.
- Excellent Python and PySpark skills.
- Proven ability to prepare, maintain and publish ETL documentation, including source-to-target mappings and business-driven transformation rules.
- Strong analytical skills when working with unstructured data sets.
- Knowledge of big data technologies (e.g., Hadoop, Spark, Hive) is nice to have.
- Experience working with Tableau is an asset.
- Experience with Agile development methodologies.
Benefits:
- Competitive Salary
- Healthcare Benefit Package
- Career Growth