I am the CEO of Soniox. We build artificial intelligence and infrastructure for audio and sound understanding. Send us an email if you are interested in joining our team.
From 2015 to 2020, I worked at Facebook on Artificial Intelligence and Applied Machine Learning. My work focused on natural language understanding and speech problems.
I received my M.S. from the University of Utah, where I worked on learning concept-level representations and short text understanding. I completed my undergraduate degree in mathematics and computer science at the University of Ljubljana in Slovenia. During my undergraduate and M.S. studies, I worked on various research problems through internships at Stanford and Google.
In the winter, I love to ski and carve many strong, short turns on the slopes. I also enjoy running, cycling, and swimming.
Founded Soniox
Soniox's mission is to accelerate the adoption of speech-based applications and spark innovation in human-machine voice interaction. We have developed the most accurate speech recognition system and made it freely available for anyone to use.
Developed a novel neural network model for speech activity detection that recognizes human speech segments in the audio stream of Facebook videos. The model is fast and accurate. Integrated it into production, reducing the volume of transcribed audio data on Facebook videos by more than 40% without affecting the word error rate.
Co-organized the Machine Intelligence workshop at NIPS 2016. The workshop aimed to stimulate theoretical and practical advances in the development of machines endowed with human-like general-purpose intelligence, focusing in particular on benchmarks for training and evaluating progress in machine intelligence.
Designed and developed DeepText models for intent classification and slot extraction in Facebook Messenger. Applied the models to recognize ride intents for Uber, Lyft, and taxis in Messenger, and to extract the object and price from for-sale Facebook posts.
M.S. Thesis: Concept Aware Co-occurrence and its Applications
Studied the problem of learning concept-level representations from large amounts of unstructured text data. Developed a structured prediction model for short text understanding: segmentation and disambiguation of phrases into concepts using limited syntax and context information.
Internship at Google Research with Haixun Wang. Developed models for search query understanding: learning to segment and disambiguate phrases in queries. The model improved the understanding of intents and products in Google Shopping search queries.
Undergraduate Thesis: Network structural properties and their application to missing property prediction
Studied the problem of predicting missing relation-types for objects in large-scale knowledge bases, DBpedia and Freebase (Google Knowledge Graph).
Designed and developed a library of generic implementations of data structures that support complex queries on data (e.g., a combination of a balanced tree with a hash table and a doubly linked list). The library achieves high performance and low memory usage, and can be easily embedded into other projects.
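To illustrate the idea of composing data structures to answer multiple query types efficiently, here is a minimal sketch (not the library itself, and in Python rather than its original language) that pairs a hash table for O(1) key lookup with a doubly linked list that preserves insertion order for ordered scans:

```python
class _Node:
    """A doubly-linked-list node holding one key/value pair."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None


class HashedLinkedMap:
    """Hash table (O(1) get/put) combined with a doubly linked list
    (insertion-ordered iteration). Illustrative name and API only."""

    def __init__(self):
        self._index = {}              # key -> _Node, for O(1) lookup
        self._head = self._tail = None

    def put(self, key, value):
        if key in self._index:        # update in place, keep position
            self._index[key].value = value
            return
        node = _Node(key, value)
        self._index[key] = node
        if self._tail is None:        # first element
            self._head = self._tail = node
        else:                         # append at the tail
            node.prev = self._tail
            self._tail.next = node
            self._tail = node

    def get(self, key):
        return self._index[key].value

    def items(self):
        """Yield (key, value) pairs in insertion order."""
        node = self._head
        while node is not None:
            yield node.key, node.value
            node = node.next
```

A real library of this kind would generalize the pattern, e.g. swapping the linked list for a balanced tree to also support ordered range queries.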
Research internship at Stanford with Prof. Jure Leskovec. Worked on the discovery of network motifs (statistically significant sub-graphs) and used them with an SVM to predict missing links in information networks.
First-year undergraduate research project: A New Algorithm for Finding Frequent Items in Streams of Data. Presented a new algorithm for finding frequent items in a data stream using only a small fraction of the resources required to store the full stream.
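The classic Misra-Gries summary is a well-known example of this kind of algorithm (shown here for illustration; it is not necessarily the algorithm from the project): with at most k-1 counters it finds every item occurring more than n/k times in a stream of length n, in a single pass:

```python
def misra_gries(stream, k):
    """One-pass frequent-items summary using at most k-1 counters.

    Every item with true count > n/k is guaranteed to survive as a
    key in the result; a second pass over the stream can verify the
    exact counts of these candidates.
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1        # room for a new candidate
        else:
            # Counters are full: decrement all, dropping zeros.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

The memory footprint depends only on k, not on the stream length, which is what makes such summaries practical for large data streams.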