At Deepgram, we’re on a mission to be the speech company — starting with providing the world’s best automatic speech recognition for business. We’ve scrapped traditional speech recognition methods for patented end-to-end deep learning speech models built specifically for the needs of each customer. This allows enterprise data science and product teams to confidently tackle initiatives like building voice assistants for doctors, using AI to assist call center agents, automatically detecting good candidates to hire, and more.
Because we serve a diverse range of customers, our team prides itself on our willingness to learn and our tenacity to keep improving. This attitude has led to exciting growth and a recent fundraising round.
Deepgram is the leader in next-generation speech analytics, holding multiple patents on end-to-end deep learning on GPUs. At Deepgram, you would be designing, training, and deploying state-of-the-art models on the latest GPU hardware, as well as exploring new learning techniques, data analytics, and software architectures that enable Deepgram to continue delivering the most accurate, scalable, and robust speech recognition available today. Ideal candidates thrive in a fast-paced, impact-driven environment where learning new skills on the fly is encouraged.
Understanding the latest advances in deep learning and speech analytics.
Developing new, or maturing existing, models for speech analytics.
Analyzing new datasets for untapped potential.
Working with customers to customize models.
Deploying new models to production.
Configuring systems (software, hardware, network, etc.) for optimal machine learning performance.
Familiarity with one or more of the popular deep learning frameworks: PyTorch, TensorFlow, Keras, etc.
Competence in Python; knowledge of C, C++, and/or Rust is a plus.
Familiarity navigating UNIX-style systems.
Comfortable communicating and brainstorming in groups, and leading new research directions.
Comfortable working directly with customers to understand how to improve deep learning offerings.