Data Science Engineer

We are seeking a talented Data Science Engineer to join our team and contribute to the development and implementation of advanced data solutions using technologies such as AWS Glue, Python, Spark, Snowflake Data Lake, S3, SageMaker, and machine learning (ML).

As a Data Science Engineer, you will play a crucial role in designing, building, and optimizing data pipelines, machine learning models, and analytics solutions. You will work closely with cross-functional teams to extract actionable insights from data and drive business outcomes.

Essential Functions and Responsibilities:

  • Develop and maintain ETL pipelines using AWS Glue for data ingestion, transformation, and integration from various sources.
  • Utilize Python and Spark for data preprocessing, feature engineering, and model development.
  • Design and implement data lake architecture using Snowflake Data Lake, the Snowflake data warehouse, and S3 for scalable, efficient storage and processing of structured and unstructured data.
  • Leverage SageMaker for model training, evaluation, deployment, and monitoring in production environments.
  • Collaborate with data scientists, analysts, and business stakeholders to understand requirements, develop predictive models, and generate actionable insights.
  • Conduct exploratory data analysis (EDA) and data visualization to communicate findings and trends effectively.
  • Stay updated with advancements in machine learning algorithms, techniques, and best practices to enhance model performance and accuracy.
  • Ensure data quality, integrity, and security throughout the data lifecycle by implementing robust data governance and compliance measures.

Qualifications and Education:

  • Bachelor's degree or higher in Computer Science, Data Science, Statistics, or a related field.
  • 5-6 years of experience with AWS services such as Glue, S3, and SageMaker, as well as Snowflake Data Lake.
  • Strong programming skills in Python for data manipulation, analysis, and modeling.
  • Experience with distributed computing frameworks like Spark for big data processing.
  • Knowledge of machine learning concepts, algorithms, and tools for regression, classification, clustering, and recommendation systems.
  • Familiarity with data visualization tools such as Tableau for creating meaningful visualizations.
  • Excellent problem-solving, analytical thinking, and communication skills.
  • Ability to work collaboratively in a team environment and manage multiple priorities effectively.
  • Experience deploying machine learning models in production environments and monitoring their performance.
  • Knowledge of MLOps practices, model versioning, and automated model deployment pipelines.
  • Familiarity with SQL, NoSQL databases, and data warehousing concepts.
  • Strong understanding of cloud computing principles and architectures.
  • Certifications in AWS, Python, Spark, or related technologies.

About the Company:

The Plymouth Rock Company and its affiliated group of companies write and manage over $1.8 billion in personal and commercial auto and homeowner’s insurance throughout the Northeast and mid-Atlantic, where we have built an unparalleled reputation for service. We continuously invest in technology, our employees thrive in our empowering environment, and our customers are among the most loyal in the industry. The Plymouth Rock group of companies employs more than 2,000 people and is headquartered in Boston, Massachusetts. Plymouth Rock Assurance Corporation holds an A.M. Best rating of “A-/Excellent”.
