大工至善|大学至真分享 http://blog.sciencenet.cn/u/lcj2212916

Blog post

[Repost] [Information Technology] [2009.11] Automatic Emotion Recognition: An Investigation of Acoustic and Prosodic Parameters

1016 reads · 2020-8-23 16:44 | Category: Research Notes | Source: Repost

This is a doctoral thesis from the University of New South Wales, Australia (author: Vidhyasaharan Sethu), 186 pages.

 


 

An essential step to achieving human-machine speech communication with the naturalness of communication between humans is developing a machine that is capable of recognising emotions based on speech. This thesis presents research addressing this problem by making use of acoustic and prosodic information. At a feature level, novel group delay and weighted frequency features are proposed. The group delay features are shown to emphasise information pertaining to formant bandwidths and to be indicative of emotions. The weighted frequency feature, based on the recently introduced empirical mode decomposition, is proposed as a compact representation of the spectral energy distribution and is shown to outperform other estimates of energy distribution. Feature level comparisons suggest that detailed spectral measures are very indicative of emotions while exhibiting greater speaker specificity. Moreover, it is shown that all features are characteristic of the speaker and require some sort of normalisation prior to use in a multi-speaker situation. A novel technique for normalising speaker-specific variability in features is proposed, which leads to significant improvements in the performance of systems trained and tested on data from different speakers. This technique is also used to investigate the amount of speaker-specific variability in different features. A preliminary study of phonetic variability suggests that phoneme-specific traits are not modelled by the emotion models and that speaker variability is a more significant problem in the investigated setup. Finally, a novel approach to emotion modelling that takes into account temporal variations of speech parameters is analysed. An explicit model of the glottal spectrum is incorporated into the framework of the traditional source-filter model, and the parameters of this combined model are used to characterise speech signals. An automatic emotion recognition system that takes into account the shape of the contours of these parameters as they vary with time is shown to outperform a system that models only the parameter distributions. The novel approach is also empirically shown to be on par with human emotion classification performance.
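The abstract highlights group delay features as one of the proposed acoustic representations. As an illustration only (the thesis's exact feature extraction is not reproduced here), the group delay spectrum of a signal frame can be computed without explicit phase unwrapping via the standard FFT identity τ(ω) = (X_R·Y_R + X_I·Y_I)/|X(ω)|², where Y is the transform of n·x[n]:

```python
import numpy as np

def group_delay(x, n_fft=512):
    """Group delay spectrum of one signal frame, in samples.

    Uses the identity tau(w) = (X_R*Y_R + X_I*Y_I) / |X(w)|^2,
    where Y is the FFT of n * x[n], avoiding phase unwrapping.
    """
    n = np.arange(len(x))
    X = np.fft.rfft(x, n_fft)
    Y = np.fft.rfft(n * x, n_fft)
    denom = np.abs(X) ** 2 + 1e-12  # guard against spectral nulls
    return (X.real * Y.real + X.imag * Y.imag) / denom
```

As a sanity check, a pure impulse delayed by d samples has a flat group delay of exactly d at every frequency bin.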
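The final contribution contrasts modelling the temporal contour shapes of speech parameters against modelling only their distributions. One simple (hypothetical, not the thesis's method) way to capture contour shape is to fit a low-order polynomial to each parameter trajectory over a normalised time axis and use the coefficients as shape descriptors:

```python
import numpy as np

def contour_shape(trajectory, order=3):
    """Summarise the temporal contour of a speech parameter
    (e.g. pitch or a glottal-model parameter) by the coefficients
    of a low-order polynomial fitted over normalised time [0, 1].

    Illustrative sketch of "modelling contour shape" as opposed to
    modelling only the frame-wise parameter distribution.
    """
    t = np.linspace(0.0, 1.0, len(trajectory))
    return np.polyfit(t, trajectory, order)  # highest power first
```

Unlike a histogram or Gaussian fit of the frame values, these coefficients distinguish, say, a rising from a falling contour even when both have identical value distributions.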

 

 

 

1. Introduction

2. Speech and Emotion

3. Speech Features

4. Speaker Variability

5. Static Classification Methods

6. Speech Parameterisation for Emotion Recognition

7. Conclusions and Future Work


https://wap.sciencenet.cn/blog-69686-1247562.html

