
Publications

2013

[1] S. Pammi and M. Chetouani, “Detection of social signals and emotions using adaptation of segmental HMMs,” (submitted) in Proc. INTERSPEECH 2013, (Lyon, France), 2013.

[2] S. Pammi, H. Khemiri, D. Petrovska-Delacretaz, and G. Chollet, “Detection of nonlinguistic vocalizations using ALISP sequencing,” in Proc. ICASSP 2013, (Vancouver, Canada), 2013.

[3] R. Niewiadomski, J. Hofmann, J. Urbain, T. Platt, J. Wagner, B. Piot, H. Cakmak, S. Pammi, T. Baur, S. Dupont, M. Geist, F. Lingenfelser, G. McKeown, O. Pietquin, and W. Ruch, “Laugh-aware virtual agent and its impact on user amusement,” (in press) in Proc. Autonomous Agents and Multiagent Systems (AAMAS 2013), 2013.

2012

[4] S. Pammi, “Synthesis of Listener Vocalizations – Towards Interactive Speech Synthesis.” PhD thesis, Naturwissenschaftlich-Technische Fakultät I, Universität des Saarlandes, Saarbrücken, Germany, 2012.

[5] M. Schröder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar, and M. Wöllmer, “Building autonomous sensitive artificial listeners,” IEEE Transactions on Affective Computing, vol. 3, pp. 165–183, April–June 2012.

[6] B. Qu, S. Pammi, R. Niewiadomski, and G. Chollet, “Estimation of FAPs and intensities of AUs based on real-time face tracking,” in Proc. 3rd International Symposium on Facial Analysis and Animation (FAA), (Vienna, Austria), 2012.

[7] S. Pammi, H. Khemiri, and G. Chollet, “Laughter detection using ALISP-based N-gram models,” in Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech, (Dublin, Ireland), 2012.

[8] R. Niewiadomski, S. Pammi, A. Sharma, J. Hofmann, T. Platt, R. Cruz, and B. Qu, “Visual laughter synthesis: Initial approaches,” in Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech, (Dublin, Ireland), 2012.

2011

[9] S. Pammi and M. Schröder, “Evaluating the meaning of synthesized listener vocalizations,” in Proc. INTERSPEECH 2011, (Florence, Italy), pp. 329–332, 2011.


[10] D. Reidsma, I. de Kok, D. Neiberg, S. Pammi, B. van Straalen, K. Truong, and H. van Welbergen, “Continuous interaction with a virtual human,” Journal on Multimodal User Interfaces, vol. 4, no. 2, pp. 97–118, 2011.


[11] M. Charfuelan, M. Schröder, and S. Pammi, “Classification of listener linguistic vocalisations in interactive meetings,” in Proc. European Signal Processing Conference (EUSIPCO), (Barcelona, Spain), 2011.

[12] S. Pammi, M. Schröder, and M. Charfuelan, “Multidimensional meaning annotation of listener vocalizations for synthesis,” in Proceedings of the Workshop Emotion and Computing, (Berlin, Germany), Springer, 2011.

[13] M. Schröder, S. Pammi, H. Gunes, M. Pantic, M. F. Valstar, R. Cowie, G. McKeown, D. Heylen, M. ter Maat, F. Eyben, B. Schuller, M. Wöllmer, E. Bevacqua, C. Pelachaud, and E. de Sevin, “Come and have an emotional workout with sensitive artificial listeners!,” in Proc. Conference on Automatic Face and Gesture Recognition (FG’11), 2011.

[14] E. Bevacqua, F. Eyben, D. Heylen, M. ter Maat, S. Pammi, C. Pelachaud, M. Schröder, B. Schuller, E. de Sevin, and M. Wöllmer, “Interacting with emotional virtual agents,” in Proc. INTETAIN’11, pp. 243–245, 2011.

[15] M. Schröder, M. Charfuelan, S. Pammi, and I. Steiner, “Open source voice creation toolkit for the MARY TTS platform,” in Proc. INTERSPEECH’11, pp. 3253–3256, 2011.

2010

[16] E. Bevacqua, S. Pammi, S. Hyniewska, M. Schröder, and C. Pelachaud, “Multimodal backchannels for embodied conversational agents,” in Proc. Intelligent Virtual Agents, (Philadelphia, USA), pp. 194–200, Springer, 2010.

[17] S. Pammi, M. Schröder, M. Charfuelan, O. Türk, and I. Steiner, “Synthesis of listener vocalisations with imposed intonation contours,” in Proc. Seventh ISCA Tutorial and Research Workshop on Speech Synthesis, (Kyoto, Japan), 2010.

[18] D. Reidsma, K. P. Truong, H. van Welbergen, D. Neiberg, S. Pammi, I. de Kok, and B. van Straalen, “Continuous interaction with a virtual human,” in Proceedings of the eNTERFACE’10 Workshop, (Amsterdam, the Netherlands), 2010.


[19] E. de Sevin, E. Bevacqua, S. Pammi, C. Pelachaud, M. Schröder, and B. Schuller, “A multimodal listener behaviour driven by audio input,” in Proceedings of the International Workshop on Interacting with ECAs as Virtual Characters, satellite of AAMAS, 2010.

[20] S. Pammi, M. Charfuelan, and M. Schröder, “Multilingual voice creation toolkit for the MARY TTS platform,” in Proc. International Conference on Language Resources and Evaluation (LREC), (Valletta, Malta), ELRA, 2010.

[21] M. Schröder, S. Pammi, R. Cowie, G. McKeown, H. Gunes, M. Pantic, M. Valstar, D. Heylen, M. ter Maat, F. Eyben, et al., “Demo: Have a chat with sensitive artificial listeners,” in Proc. AISB’2010 Symposium “Towards a Comprehensive Turing Test”, (Leicester, UK), 2010.

2009

[22] S. Pammi and M. Schröder, “Annotating meaning of listener vocalizations for speech synthesis,” in Affective Computing and Intelligent Interaction (ACII) 2009, September 2009.


[23] S. Pammi, “Synthesis of nonverbal listener vocalizations,” in Doctoral Consortium at Affective Computing and Intelligent Interaction (ACII) 2009, pp. 57–64, 2009.

[24] S. Pammi and M. Schröder, “A corpus-based analysis of backchannel vocalizations,” in Interdisciplinary Workshop on Laughter and other Interactional Vocalisations in Speech, (Berlin, Germany), 2009.

[25] M. Schröder, E. Bevacqua, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, S. Pammi, M. Pantic, C. Pelachaud, and B. Schuller, “A demonstration of audiovisual sensitive artificial listeners,” in Proc. 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009), IEEE, 2009.

[26] S. Pammi, M. Charfuelan, and M. Schröder, “Quality control of automatic labelling using HMM-based synthesis,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), pp. 4277–4280, April 2009.

[27] M. Schröder, S. Pammi, and O. Türk, “Multilingual MARY TTS participation in the Blizzard Challenge 2009,” in Proc. Blizzard Challenge, vol. 9, 2009.

2008

[28] P. Gebhard, M. Schröder, M. Charfuelan, C. Endres, M. Kipp, S. Pammi, M. Rumpler, and O. Türk, “IDEAS4Games: Building expressive virtual characters for computer games,” in Proceedings of the 8th international conference on Intelligent Virtual Agents, IVA ’08, (Berlin, Heidelberg), pp. 426–440, Springer-Verlag, 2008.

[29] M. Schröder, M. Charfuelan, S. Pammi, and O. Türk, “The MARY TTS entry in the Blizzard Challenge 2008,” in Proc. Blizzard Challenge, 2008.

[30] T. Sarkar, S. Joshi, S. C. Pammi, and K. Prahallad, “LTS using decision forest of regression trees and neural networks,” in Proceedings of INTERSPEECH 2008, (Brisbane, Australia), pp. 1885–1888, 2008.

[31] M. Schröder, P. Gebhard, M. Charfuelan, C. Endres, M. Kipp, S. Pammi, M. Rumpler, and O. Türk, “Enhancing animated agents in an instrumented poker game,” in Proceedings of the 31st annual German conference on Advances in Artificial Intelligence, KI ’08, (Berlin, Heidelberg), pp. 316–323, Springer-Verlag, 2008.

2007

[32] A. A. Raj, T. Sarkar, S. C. Pammi, S. Yuvaraj, M. Bansal, K. Prahallad, and A. W. Black, “Text processing for text to speech systems in Indian languages,” in Proceedings of 6th ISCA Speech Synthesis Workshop SSW6, (Bonn, Germany), pp. 188–193, 2007.

[33] S. Pammi and K. Prahallad, “POS tagging and chunking using decision forests,” in Proceedings of Workshop on Shallow Parsing in South Asian Languages at IJCAI, 2007.

[34] V. Keri, S. Pammi, and K. Prahallad, “Pause prediction from lexical and syntax information,” in International Conference on Natural Language Processing (ICON), (Hyderabad, India), pp. 45–49, 2007.
