Yoshiko ARIMOTO's Lab, Chiba Institute of Technology
Publications
Academic Papers
Y. Arimoto, “Phonetic analysis on speech-laugh occurrence in a spontaneous gaming dialog,” Acoustical Science and Technology, vol. 44, no. 1, pp. 36-39, 2023. [PDF]
Y. Arimoto and K. Okanoya, “Multimodal features for automatic emotion estimation during face-to-face conversation,” Journal of the Phonetic Society of Japan, vol. 19, no. 1, pp. 53-67, 2015. [PDF]
Y. Arimoto and K. Okanoya, “Mutual emotional understanding in a face-to-face communication environment: How speakers understand and react to listeners’ emotion in a game task dialog,” Acoustical Science and Technology, vol. 36, no. 4, pp. 370-373, 2015. [PDF]
Y. Arimoto, H. Kawatsu, S. Ohno, and H. Iida, “Naturalistic emotional speech collection paradigm with online game and its psychological and acoustical assessment,” Acoustical Science and Technology, vol. 33, no. 6, pp. 359-369, 2012. [PDF]
Y. Arimoto, S. Ohno, and H. Iida, “Assessment of spontaneous emotional speech database toward emotion recognition: Intensity and similarity of perceived emotion from spontaneously expressed emotional speech,” Acoustical Science and Technology, vol. 32, no. 1, pp. 26-29, 2011. [PDF]
M. Fukuda and Y. Arimoto, “Physiological Study on the Effect of Game Events in Response to Player’s Laughter,” in Proceedings of Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2022 (APSIPA ASC 2022), pp. 1961-1969, doi: 10.23919/APSIPAASC55919.2022.9979868, 2022.
T. Matsuda and Y. Arimoto, “Acoustic discriminability of unconscious laughter and scream during game-play,” in Proceedings of Speech Prosody 2022, pp. 575-579, doi: 10.21437/SpeechProsody.2022-117, 2022.
H. Mori, T. Nagata, and Y. Arimoto, “Conversational and Social Laughter Synthesis with WaveNet,” in Proceedings of Interspeech 2019, pp. 520-523, doi: 10.21437/Interspeech.2019-2131, 2019. [PDF]
Y. Arimoto, Y. Horiuchi, and S. Ohno, “Consistency of base frequency labelling for the F0 contour generation model using expressive emotional speech corpora,” in Proceedings of Speech Prosody 2018, pp. 398-402, 2018. [PDF]
Y. Arimoto, “Challenges of Building an Authentic Emotional Speech Corpus of Spontaneous Japanese Dialog,” in the special speech session of The International Conference on Language Resources and Evaluation (LREC2018), pp. 6-13, 2018. [PDF]
Y. Arimoto and H. Mori, “Emotion category mapping to emotional space by cross-corpus emotion labeling,” in Proceedings of Interspeech 2017, pp. 3276-3280, 2017. [PDF]
Y. Arimoto and K. Okanoya, “Comparison of Emotional Understanding in Modality-controlled Environments using Multimodal Online Emotional Communication Corpus,” in Proceedings of The International Conference on Language Resources and Evaluation (LREC2016), pp. 2162-2167, 2016. [PDF]
H. Mori and Y. Arimoto, “Accuracy of Automatic Cross-Corpus Emotion Labeling for Conversational Speech Corpus Commonization,” in Proceedings of The International Conference on Language Resources and Evaluation (LREC2016), pp. 4019-4023, 2016. [PDF]
Y. Arimoto and K. Okanoya, “Emotional synchrony and covariation of behavioral/physiological reactions between interlocutors,” in Proceedings of the 17th Oriental COCOSDA (International Committee for the Co-ordination and Standardization of Speech Databases and Assessment Techniques) Conference 2014, pp. 100-105, 2014. [PDF]
Y. Arimoto and K. Okanoya, “Individual differences of emotional expression in speaker’s behavioral and autonomic responses,” in Proceedings of Interspeech 2013, pp. 1011-1015, 2013. [PDF]
Y. Arimoto and K. Okanoya, “Dimensional Modeling of Perceptual Difference of Multimodal Emotion Perception,” in Proceedings of the Workshops of the 9th International Conference on the Evolution of Language, pp. 45-46, 2012.
Y. Arimoto and K. Okanoya, “Dimensional Mapping of Multimodal Integration on Audiovisual Emotion Perception,” in Proceedings of the International Conference on Audio-Visual Speech Processing 2011, pp. 89-94, 2011. [PDF]
Y. Arimoto, H. Kawatsu, S. Ohno, and H. Iida, “Emotion recognition in spontaneous emotional speech for anonymity-protected voice chat systems,” in Proceedings of Interspeech 2008, pp. 322-325, 2008. [PDF]
Y. Arimoto, S. Ohno, and H. Iida, “Study on voice quality parameters for anger degree estimation,” in Proceedings of Acoustics’08, paper 2702, 2008.
Y. Arimoto, S. Ohno, and H. Iida, “Automatic Emotional Degree Labeling for Speakers’ Anger Utterance during Natural Japanese Dialog,” in Proceedings of The International Conference on Language Resources and Evaluation (LREC2008), pp. 2279-2284, 2008. [PDF]
K. Murata, M. Enomoto, Y. Arimoto, and Y. Nakano, “When Should Animated Agents Give Additional Instructions to Users? - Monitoring user’s understanding in multimodal dialogues -,” in Proceedings of International Conference on Control, Automation and Systems (ICCAS 2007), pp. 733-736, 2007.
Y. I. Nakano, K. Murata, M. Enomoto, Y. Arimoto, Y. Asa, and H. Sagawa, “Predicting Evidence of Understanding by Monitoring User’s Task Manipulation in Multimodal Conversations,” in Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL2007) Demo and Poster Sessions, pp. 121-124, 2007.
Y. Arimoto, S. Ohno, and H. Iida, “Acoustic Features of Anger Utterances during Natural Dialog,” in Proceedings of Interspeech 2007, pp. 2217-2220, 2007. [PDF]
Y. Arimoto, S. Ohno, and H. Iida, “Emotion Labeling for Automatic Estimation of Speakers’ Anger Emotion Degree,” in Proceedings of Oriental COCOSDA (The International Committee for the Co-ordination and Standardization of Speech Databases and Assessment Techniques) Workshop 2006, 19_jp, pp. 48-51, 2006.
Y. Arimoto, S. Ohno, and H. Iida, “A method for estimating the degree of a speaker’s anger using acoustic features and linguistic representation,” in Proceedings of the Fourth Joint Meeting of ASA and ASJ, 1pSC44, 2006.
H. Iida and Y. Arimoto, “One-word Utterance Pragmatics and Emotional Speech Intention,” in Proceedings of International Conference on Multidisciplinary Information Sciences and Technologies (InSciT2006), Vol. II, pp. 142-146, Merida, Spain, 2006.
Y. Arimoto, S. Ohno, and H. Iida, “A Method for Discriminating Anger Utterances from Other Utterances using Suitable Acoustic Features,” in Proceedings of International Conference on Speech and Computer (SPECOM2005), pp. 613-616, 2005.
Y. Arimoto, S. Ohno, and H. Iida, “Emotion labeling for automatic estimation of the speakers’ angry intensity,” in Computer Processing of Asian Spoken Languages, S. Itahashi and C. Tseng, Eds. Los Angeles: Americas Group Publications, 2010, Chapter 6, Section 2, Part 4, pp. 292-295. (ISBN 978-0-935047-72-1)