People

Athanasios (Nassos) Katsamanis

Researchers
Institute for Language and Speech Processing
+30 210 6875405

Dr. Athanasios (Nassos) Katsamanis has been a Principal Researcher at ATHENA R.C. since 2019, conducting research on multimodal speech processing and multimodal human-computer interaction. Earlier, he was the Tech Lead of Behavioral Signal Technologies, Inc., where he worked with a team of engineers on the research and development of conversation analytics based on affective computing of spoken interactions. Nassos received his M.Eng. (with highest honors) and Ph.D. from the National Technical University of Athens (NTUA) in 2003 and 2009, respectively, and then worked as a Postdoctoral Research Associate at the Viterbi School of Engineering of the University of Southern California (USC) for almost three years. He has conducted extensive research on the computational modeling and understanding of human behavior through multimodal processing and analysis of speech, facial expressions, and body language.

After his postdoc at USC, he spent the following three years as a Research Associate at NTUA, working on audiovisual saliency and audiovisual speech synthesis within the Greek research project Cognimuse. In parallel, he worked as a Visiting Research Associate at the Robotic Perception and Interaction Unit of ATHENA R.C. in Greece, investigating smart-home speech technologies and human-robot interfaces for children and the elderly. From 2013 to 2015 he was the Principal Investigator of a research project funded by the Onassis Foundation on adapting these technologies to assist paraplegic patients. He has published more than 60 papers (cited more than 1,700 times, source: Google Scholar) in international peer-reviewed journals and conferences in the areas of multimodal speech processing, human behavior analysis, affective computing, speech production modeling, speech recognition, acoustic event detection, sign language recognition, and multimodal fusion. Nassos is also an entrepreneur: he co-founded beenotes and, as its CTO, built the first speech-enabled apiculture data collection and analysis platform in 2016.