Speech is only a small part of acoustic information; what we hear every day goes far beyond it. Cochl provides sound-understanding AI that can recognize any sound the way a human does.
We have an end-to-end, automated pipeline covering audio data collection, quality validation, pre-processing, augmentation, model training, testing, post-processing, and deployment. Completely unified.
We don’t require users to adopt specific hardware. Our solution is purely software-based, and our system automatically handles a wide variety of microphones and recording environments, the result of extensive research on generalization techniques.
Nothing handcrafted, rule-based, or hybrid. Our system is an end-to-end, fully deep-learning-based algorithm that mimics human auditory perception.
Our sound AI system runs in real time on both cloud and edge, taking advantage of the strengths of each environment. This means it can be easily adopted virtually anywhere.
We are back-to-back winners of IEEE DCASE, the largest challenge in this field, and won the Kaggle “General-purpose audio tagging” competition out of 556 teams.
We study and experiment with many projects, and there are many ways to explore our potential at Cochl.Labs. Try out our web demos anywhere, anytime, and experience our technology for yourself.