IEEEJAS's personal blog on ScienceNet: http://blog.sciencenet.cn/u/IEEEJAS


Tomorrow's live stream preview ‖ Frontier Hot Topics in Automation Lecture Series, Lecture 18

1640 views · 2022-6-23 08:29 | Category: Blog news

Title: Higher-order learning in games and feedback control

Time: June 24, 2022, 10:00-11:00 (Beijing Time)

Speaker: Prof. Jeff S. Shamma, University of Illinois at Urbana-Champaign

Host: Prof. Qing-Long Han, Swinburne University of Technology

Zoom Meeting ID: 892 582 6415

Password: SWIN5858

Zoom link:

https://swinburne.zoom.us/j/8925826415?pwd=Y3VSMmtUWi9sRHR0MnBJVmlxTUcwUT09

[Live stream poster: 直播海报18.jpg]

Abstract:

In game theoretic learning, e.g., for matrix games and population games, agents myopically adapt their strategies in reaction to the evolving strategies of other agents in an effort to maximize their own utilities. The resulting interactions can be represented as a dynamical system that maps agent observations to agent strategies. Well-known and widely studied examples of adaptation/learning rules include fictitious play, gradient play, regret minimization, and replicator dynamics.  In these examples, the associated learning rule has an induced dimensionality, or number of states, that is equal to the number of agent actions. As the terminology suggests, “higher-order” learning refers to learning rules that are not restricted in their dimensionality. Such learning rules introduce auxiliary states not included in their lower-order counterparts, while respecting the original informational structure of what is observed and known to each agent.
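To make the dimensionality point concrete, here is a standard textbook example written in notation of our own choosing (the symbols A, x, y, z, λ, γ below are ours and not necessarily those used in the talk). Lower-order replicator dynamics for a player with mixed strategy x, facing an opponent strategy y under payoff matrix A, are

\[ \dot{x}_i = x_i \big( (Ay)_i - x^\top A y \big), \]

so the player's state is x itself and the number of states equals the number of actions. A higher-order "anticipatory" variant in the spirit described above adds an auxiliary filter state z, built only from the already-observed y, and reacts to a short-horizon forecast \(\hat{y}\) instead of y:

\[ \dot{z} = \lambda (y - z), \qquad \hat{y} = y + \gamma \lambda (y - z), \qquad \dot{x}_i = x_i \big( (A\hat{y})_i - x^\top A \hat{y} \big). \]

Here \(\lambda (y - z)\) acts as an approximate derivative of y, so the informational structure is unchanged: the extra state z depends only on what the player already observes.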

 

This talk presents an overview of results that illustrate how higher-order learning can induce qualitative changes in long-run outcomes, including convergence to Nash equilibria not possible under lower-order dynamics (including uncoupled dynamics counterexamples and replicator dynamics for zero-sum games). A specific focus will be on higher-order “anticipatory” versions of lower-order learning rules, which appear to parallel optimistic versions of optimization algorithms. The talk concludes with an analysis framework for higher-order learning that exploits an implicit feedback structure in game-theoretic learning, where the learning dynamics are separated from the game specifics. In particular, the talk presents the concept of passivity from feedback control, its application to higher-order learning analysis, and connections to contractive/stable games.
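For readers unfamiliar with passivity, a minimal statement of the notion referred to here, again in notation of our own choosing: a dynamical system with input u, output y, and state \(\xi\) is passive if there exists a storage function \(S(\xi) \ge 0\) such that

\[ \dot{S}(\xi) \le u^\top y \]

along all trajectories. In the feedback view sketched above, the learning dynamics (payoffs in, strategies out) sit in a feedback loop with the game map (strategies in, payoffs out); roughly speaking, when the learning dynamics are passive and the game satisfies a complementary dissipativity property, as contractive/stable games do, standard passivity arguments yield stability conclusions for the interconnection.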

 

Speaker biography:

[Photo: Jeff S. Shamma]

Jeff S. Shamma is with the University of Illinois at Urbana-Champaign where he is the Department Head of Industrial and Enterprise Systems Engineering (ISE) and Jerry S. Dobrovolny Chair in ISE. His prior academic appointments include faculty positions at the King Abdullah University of Science and Technology (KAUST), where he is an Adjunct Professor of Electrical and Computer Engineering, and the Georgia Institute of Technology, where he was the Julian T. Hightower Chair in Systems and Controls. Jeff received a PhD in Systems Science and Engineering from MIT in 1988. He is a Fellow of IEEE and IFAC; a recipient of the IFAC High Impact Paper Award, AACC Donald P. Eckman Award, and NSF Young Investigator Award; and a past Distinguished Lecturer of the IEEE Control Systems Society. He has been a plenary or semi-plenary speaker at several conferences, including NeurIPS, World Congress of the Game Theory Society, IEEE Conference on Decision and Control, and the American Control Conference. Jeff is currently serving as the Editor-in-Chief for the IEEE Transactions on Control of Network Systems.


How to attend

Option 1: Join the Zoom meeting

Zoom Meeting ID: 892 582 6415

Password: SWIN5858

Option 2: Watch the live stream in the bilibili live room


Option 3: Scan the QR code below to watch the live stream on Koushare (蔻享)

[QR code for the Koushare live stream: 蔻享直播18.png]




https://wap.sciencenet.cn/blog-3291369-1344128.html
