DATA ENGINEER
Full-Time and Internship
Do you love data analytics, build data pipelines in your sleep, and enjoy problem solving? Data Science Dojo is looking for a Data Engineer to join our growing team to help construct said data pipelines, investigate new technologies, and exude a never-ending passion for big data. You should also have a working knowledge of data architecture and database design, love automation, and be able to collect data where none exists (web scraping). Our ideal Data Engineer will have familiarity with machine learning, advanced knowledge of real-time data streams and data engineering, and experience with cloud computing and distributed systems. Ultimately, your thirst for knowledge in all of the above is the quality we value most.
Job Responsibilities
- Develop reliable data pipelines that convert data into insights
- Explore and document new technologies in coordination with technical writers
- Share data engineering insights with a learning community
- Explain data engineering processes through written tutorials
- Research, analyze, and evaluate new big data technologies
Ideal applicants should have
- Experience with development on the Azure cloud platform
- Source code demonstrating experience with MapReduce, Hadoop, and Spark
- Extensive experience with C#, Scala, Azure PowerShell, or Java
- Development experience with IntelliJ or Visual Studio preferred
- Degree in Computer Science or Electrical and Computer Engineering preferred
- Must be eligible to work in the United States
- Ability to impact and influence product strategy without formal authority, drive cross-group collaboration, successfully advocate for customers, and apply critical thinking and creative solutions in a dynamic, highly ambiguous working environment.
- Experience using quantitative and qualitative data to make decisions and recommendations, to build and communicate plans, and to monitor and measure progress against goals.
- Strong sense of pride and personal accountability for end-to-end product/service quality, completeness, and the resulting user experience.
- Self-starter with a proven ability to work in a fast-paced environment; possesses a “whatever it takes” mentality and can quickly and easily adapt to changing mandates and priorities.
- Must demonstrate a sense of urgency around critical priorities, but work calmly, independently and effectively under pressure.
Basic Qualifications
- A Bachelor's degree or higher in Statistics, Mathematics, Computer Science, Electrical and Computer Engineering, or Economics
- Knowledge of some of the following: R, Python, Hive, Pig, Mahout, Java, C#
- Relevant coursework in machine learning, data science, data mining, big data, and/or statistical inference
- Working knowledge of big data concepts like Hadoop, MapReduce, and HDFS
- Familiarity with data tools and services in the Azure, AWS, and/or GCP ecosystems is preferred
- 1-4+ years of demonstrated experience using quantitative and qualitative data to make decisions and recommendations, build and communicate plans, and monitor and measure progress against goals.