AI Safety Data Scientist, Trust and Safety

2 hours ago • 1–2 years • Data Analyst

Job Description

The AI Safety Data Scientist role within Google's Trust & Safety team focuses on developing scalable safety solutions for AI products. It involves leveraging advanced machine learning and AI techniques to examine Google's protection measures, identify shortcomings, and provide insights for continuous security enhancement. Responsibilities include applying statistical and data science methods to improve AI safety, crafting compelling data stories for stakeholders including senior leadership, building automated data pipelines and self-service dashboards, and working with potentially sensitive or upsetting content.
Must have:
  • Bachelor's degree or equivalent practical experience
  • 1+ years of data analysis experience
  • 1+ years of project management experience
Good to have:
  • Quantitative degree (CS, Stats, Math)
  • 2+ years of data analysis or data science experience
  • Experience with SQL, R, Python, or C++
  • Experience applying machine learning techniques to datasets
  • Experience in abuse/fraud disciplines (web security, harmful content moderation, threat analysis)
  • Excellent problem-solving and critical thinking skills

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 1 year of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • 1 year of experience managing projects and defining project scope, goals, and deliverables.

Preferred qualifications:

  • Bachelor’s degree in a quantitative discipline (e.g., Computer Science, Statistics, Mathematics, Operations Research).
  • 2 years of experience in a data analysis or data science setting.
  • Experience with one or more of the following languages: SQL, R, Python, or C++.
  • Experience in abuse and fraud disciplines, focused on web security, harmful content moderation, and threat analysis.
  • Experience applying machine learning techniques to datasets.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

About the job

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

The AI Safety Protections team safeguards Generative AI experiences across Google products for all users, including enterprise clients. By developing and deploying safety classifiers, both server-side and on-device, the team ensures the distribution of safe and policy-compliant content. The team takes a data-driven approach to problem-solving and excels at employing cutting-edge AI solutions for enhanced safety.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

Responsibilities

  • Develop scalable safety solutions for AI products across Google by leveraging advanced machine learning and AI techniques.
  • Apply statistical and data science methods to examine Google's protection measures, uncover potential shortcomings, and develop insights for continuous security enhancement.
  • Drive business outcomes by crafting compelling data stories for a variety of stakeholders, including senior leadership.
  • Develop automated data pipelines and self-service dashboards to provide timely insights at scale.
  • Work with sensitive content or situations; this may include exposure to graphic, controversial, or upsetting topics or content.


About the Company

A problem isn't truly solved until it's solved for all. Googlers build products that help create opportunities for everyone, whether down the street or across the globe. Bring your insight, imagination and a healthy disregard for the impossible. Bring everything that makes you unique. Together, we can build for everyone.


Similar Jobs

Metadrob - Unreal Engine Developer

Metadrob, India (On-Site)

Qualcomm - Senior Engineer - Voice AI Power

Qualcomm, India (On-Site)

Electronic Arts - Security Software Engineer

Electronic Arts, Canada (On-Site)

ION - Data Entry Specialist

ION, United States (Remote)

Publicis Groupe - Data Strategy Lead

Publicis Groupe, Colombia (Hybrid)

Amgen - Data Scientist

Amgen, India (On-Site)

Meta - Integrity Science Engineer

Meta, United States (Remote)

Luxoft - Data Business Analyst

Luxoft (On-Site)
