AIART 2025, an academic workshop co-organized with MIR, will be held in France on July 4, 2025 (in conjunction with ICME 2025). You are welcome to follow and attend!
The 7th IEEE Workshop on
Artificial Intelligence for Art Creation
(AIART 2025)
Introduction
Recent advances in AI-Generated Content (AIGC) have become an innovative engine for digital content generation, drawing increasing attention from both academia and industry. Across creative fields, AI has sparked new genres and experimentation in painting, music, film, storytelling, fashion, and design. Researchers explore the concept of co-creation with AI systems as well as the ethical implications of AI-generated images and texts. AI has been applied to art-historical research and media studies. The aesthetic value of AI-generated content and AI's impact on art appreciation have also been contested subjects in recent scholarship. AI has not only exhibited creative potential but also stimulated research from the diverse perspectives of neuroscience, cognitive science, psychology, literature, art history, and media and communication studies. Despite these promising features of AI for art, we still face many challenges, such as bias in AI models, the lack of transparency and explainability of algorithms, and copyright issues surrounding training data and AI artworks.
This is the 7th AIART workshop, to be held in conjunction with ICME 2025 in Nantes, France. It aims to bring forward cutting-edge technologies and the most recent advances in AI art, as well as perspectives from neuroscience, cognitive science, psychology, literature, art history, and media and communication studies. The theme of AIART 2025 is AI and Human Co-creativity.
Partner: Machine Intelligence Research
Machine Intelligence Research (IF: 8.7, JCR Q1), published by Springer and sponsored by the Institute of Automation, Chinese Academy of Sciences, was formally launched in 2022. The journal publishes high-quality papers on original theoretical and experimental research, targets special issues on emerging topics and specific subjects, and strives to bridge the gap between theoretical research and practical applications. The journal has been indexed by ESCI, EI, Scopus, CSCD, etc.
MIR official website:
https://www.springer.com/journal/11633
MIR Editor-in-Chief:
Tan Tieniu, Nanjing University & Chinese Academy of Sciences
MIR Associate Editors-in-Chief:
Yike Guo, Hong Kong University of Science and Technology, China
Brian C. Lovell, The University of Queensland, Australia
Danilo P. Mandic, Imperial College London, UK
Liang Wang, Chinese Academy of Sciences, China
Keynote 1
Speaker: Changwen Chen
Title: Aesthetics Reasoning based on Multimodal LLM
Time: 9:10-9:40, July 4, 2025
Abstract:
The rapid progress of generative art has democratized the creation of visually pleasing imagery. However, achieving genuine artistic impact, a quality that resonates with viewers on a deeper, more meaningful level, requires a sophisticated aesthetic sensibility. This sensibility involves a multi-faceted reasoning process that extends beyond simple visual appeal and has often been overlooked by current computational models. This talk presents an initial endeavor to capture such a complex process by investigating how the reasoning capability of Multimodal LLMs (MLLMs) can be effectively elicited for aesthetic judgment. Our recent research reveals a critical challenge: MLLMs exhibit a tendency towards hallucination during aesthetic reasoning, characterized by subjective opinions and unsubstantiated artistic interpretations. We shall demonstrate that these limitations can be overcome by employing an evidence-based, objective reasoning process, as substantiated by the proposed baseline algorithm, ArtCoT. MLLMs prompted by this principle produce multi-faceted and in-depth aesthetic reasoning that aligns significantly better with human judgment. These findings have direct applications in areas such as AI art tutoring and reward models for generative art. We hope the proposed aesthetic reasoning framework can ultimately pave the way for AI systems that truly understand, appreciate, and contribute to artistic works, much as human aesthetic judgment does.
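For readers who want a concrete picture of the evidence-based prompting idea described in the abstract, the sketch below shows a minimal, hypothetical two-stage pipeline in Python. The function query_mllm, the prompts, and the overall structure are illustrative assumptions in the spirit of ArtCoT, not the authors' released code.

```python
# Hypothetical sketch of evidence-grounded aesthetic reasoning (not the released ArtCoT code).
# query_mllm is a stand-in for whatever multimodal LLM client you actually use.

def query_mllm(image_path: str, prompt: str) -> str:
    """Stub: replace with a real multimodal LLM call from your own client library."""
    raise NotImplementedError("Plug in your own MLLM client here.")

def aesthetic_judgment(image_path: str) -> str:
    # Stage 1: elicit concrete, checkable visual evidence rather than opinions.
    evidence = query_mllm(
        image_path,
        "List only observable facts about this artwork: composition, color palette, "
        "brushwork or rendering technique, and subject matter. Do not give opinions.",
    )
    # Stage 2: require the final judgment to cite only the collected evidence,
    # which is intended to curb subjective, unsubstantiated interpretations.
    verdict = query_mllm(
        image_path,
        "Using ONLY the evidence below, assess the artwork's aesthetic quality and "
        f"justify each point by referring to a specific piece of evidence.\n\nEvidence:\n{evidence}",
    )
    return verdict
```

Separating evidence collection from judgment is the design idea the abstract highlights: the second call can only cite what the first call observed, which is meant to reduce hallucinated artistic interpretation.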
Biography:
Chang Wen Chen is currently Chair Professor of Visual Computing at The Hong Kong Polytechnic University. Before his current position, he served as Dean of the School of Science and Engineering at The Chinese University of Hong Kong, Shenzhen, from 2017 to 2020, and concurrently as Deputy Director at Peng Cheng Laboratory from 2018 to 2021. Previously, he was an Empire Innovation Professor at the State University of New York at Buffalo (SUNY) from 2008 to 2021 and the Allan Henry Endowed Chair Professor at the Florida Institute of Technology from 2003 to 2007. He received his BS degree from the University of Science and Technology of China in 1983, an MS degree from the University of Southern California in 1986, and his PhD degree from the University of Illinois at Urbana-Champaign (UIUC) in 1992.
He has served as Editor-in-Chief for IEEE Trans. Multimedia (2014-2016) and for IEEE Trans. Circuits and Systems for Video Technology (2006-2009). He has received many professional achievement awards, including ten (10) Best Paper Awards or Best Student Paper Awards, the prestigious Alexander von Humboldt Award in 2010, the SUNY Chancellor’s Award for Excellence in Scholarship and Creative Activities in 2016, the UIUC ECE Distinguished Alumni Award in 2019, and the ACM SIGMM Outstanding Technical Achievement Award in 2024. He is an IEEE Fellow, a SPIE Fellow, and a Member of Academia Europaea.
Keynote 2
Speaker: Xinyuan Cai
Title: After the Dawn: Challenges and Strategies for AI-Driven Creative Education
Time: 10:40-11:10, July 4, 2025
Abstract:
With the rapid advancement of artificial intelligence technologies, creative education now stands at the threshold of a profound transformation. After the Dawn: Challenges and Strategies for AI-Driven Creative Education explores the deep impacts of AI on art and design education, while uncovering the underlying challenges and potential pathways for reform. This is not merely a response to technological intervention in teaching, but a redefinition of the very concept of “creativity.”
This talk unfolds across three dimensions. First, it reviews the current practical applications of AI in artistic creation, design tools, and educational platforms, revealing its advantages in improving efficiency and expanding the boundaries of creative thinking. Taking AI tools such as DeepSeek and ARTI Designer as examples, we observe how these tools are capable of generating stylistically distinct and logically coherent textual and visual outputs, becoming "new collaborators" in the creative process. Second, the lecture delves into the structural issues AI brings to creative education, such as the weakening of students' originality, increased technological dependency, shifts in the role of educators, and imbalances in assessment systems. Meanwhile, we must also confront the cognitive biases and semantic inaccuracies exhibited by models like DeepSeek in educational contexts, underscoring the urgent need for ethical frameworks and critical literacy. Finally, the lecture proposes a set of systematic strategies and recommendations, ranging from curriculum redesign, AI ethics education, and interdisciplinary collaboration mechanisms to re-training programs for educators, to help higher education institutions build a more open, flexible, and sustainable AI-driven creative education system.
“After the Dawn” symbolizes not only the illumination brought by technology, but also the clarity and choices faced by educators and learners in the age of intelligence. This talk aims to inspire deep reflection on the future direction of creative education and to promote the formation of an innovative educational paradigm that embraces technology while upholding humanistic values.
Biography:
Dr. Xinyuan Cai is a Professor, Ph.D. supervisor, and nationally distinguished expert. He is Dean of the Design School at Huazhong University of Science and Technology, Director of the Key Laboratory of Lighting Interactive Service and Technology of the Ministry of Culture and Tourism, Director of the Hubei Provincial Engineering Research Centre of Digital Light and Shadow Technology, Chairman of the ARTI Collaborative Platform of AI Art Education in China, and a member of the Teaching Steering Committee for the Animation and Digital Media Major of the Ministry of Education. Dr. Cai has long been engaged in teaching and research on digital media art theory and education, digital light and shadow art for environments and landscapes, and artificial intelligence art and design. He has presided over more than 20 national key projects and has completed many major national, provincial, and municipal cultural-technological integration projects, such as the 'Shining Hubei' float for the 70th anniversary of the National Day and the Wuhan Yangtze River light show. He led the construction of the 'ARTI Designer XL' art and design supercomputing platform to promote the development of artificial intelligence art education. His research and practice have promoted the deep integration of art, science, technology, and culture, and have played an important leading role in design innovation in the era of intelligence.
Keynote 3
Speaker: Jing Dong
Title: Reflections and Outlook on Generative Artificial Intelligence: From the Security and Ethics Perspective
Time: 13:30-14:00, July 4, 2025
Abstract:
Generative Artificial Intelligence (Generative AI), powered by Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs), is rapidly reshaping the landscape of artificial intelligence. State-of-the-art models such as GPT-4, DeepSeek, Claude, and DALL-E 3 demonstrate significant progress in generative capabilities, enabling breakthroughs in creative content synthesis, logical inference, automated decision-making, and domain-specific applications. However, the accelerated deployment of these systems has also exposed critical security vulnerabilities and ethical concerns, including the risks of misuse, deepfakes, phishing scams, data privacy breaches, and threats to model security. These issues have raised concerns among individuals, organizations, communities, and even nations. As Generative AI continues to evolve and integrate into various applications and sectors, the need for robust mechanisms to ensure the safety, trustworthiness, and ethical use of generative models has become increasingly urgent. This talk will focus on security and ethics threats, state-of-the-art solutions, and future challenges of generative AI, especially for visual information.
Biography:
Dr. Jing Dong is currently a Full Professor/Researcher at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences (CASIA). She is a senior member of IEEE/CCF/CSIG. Her research interests include pattern recognition, image processing, and image forensics. She has published more than 100 academic papers, chaired many major national scientific projects, and played a leading role in several national and international technical conferences.
She has served as the IEEE Biometrics Council Beijing Chapter Chair since 2019 and as an IEEE R10 ExCom member since 2017. She was also the IEEE SPS Membership Development Director from 2022 to 2024.
She has received the IBM Faculty Award (2016), the ICPR Best Scientific Paper Award (2018), the CAAI Outstanding Individual Member Award (2019), the CSIG Outstanding Female Young Scientist Award (2020), the CSIG Science and Technology Award (2021), the Wu Wenjun Artificial Intelligence Science and Technology Award (2021), and the CAI Innovation Award (1st Prize, 2022) for her excellent contributions to technical innovation and her leadership in the community.
Keynote 4
Speaker: Haonan Cheng
Title: Audio Computing in Multimedia Intelligence: Methods, Applications, and Prospects
Time: 14:50-15:20, July 4, 2025
Abstract:
Audio computing in multimedia intelligence, at the intersection of artificial intelligence, physical acoustics, and art, is undergoing a paradigm shift from traditional signal analysis to deep semantic understanding. In recent years, the boundary of generative AI combined with audio has continued to expand. Audio computing is reshaping the sensory dimensions of human-computer interaction, from traditional audio recording to personalized music creation and from ambient sound simulation to cross-modal content generation, which also poses new security risks. This talk focuses on the key methods, typical applications, and future trends of audio computing, with an emphasis on its critical role in intelligent creation, immersive experience, and content security.
Biography:
Haonan Cheng is an Associate Researcher at the State Key Laboratory of Media Convergence and Communication, Communication University of China. Her research mainly focuses on audio information processing, audio-visual cross-modal generation, and forgery detection.
In 2024 she became the first technical expert in China to be awarded the Asia-Pacific Young Engineer Prize by the ABU, and in 2025 she was selected for the Beijing National Governance and Young Talent Cultivation Program. In recent years, she has published more than 40 SCI/EI papers in venues such as IEEE TOG, TIFS, TASLP, SIGGRAPH, IEEE VR, IJCAI, AAAI, and ACM MM. She holds two authorized national invention patents, and won the Excellent Paper Award at the 5th CSIG China Media Forensics and Security Conference and the Best Poster Paper Award at the 20th International Forum on Digital Multimedia Communications. She has been funded by more than 10 projects, including the National Natural Science Foundation of China, the National Key R&D Program, the National Social Science Foundation of China, and the Medium and Long-term Science and Technology Program for Radio, Television and Audiovisual Network. She serves as a member of the Multimedia Specialized Committee of the Chinese Society of Image and Graphics, Program Chair of the International Forum on Digital Multimedia Communications, Forum Chair of the China Multimedia Conference, and Session Chair of ACM MM and other international conferences.
Keynote 5
Speaker: Terence Broad
Title: Explaining AI through artistic practice
Time: 16:10-16:40, July 4, 2025
Abstract:
Generative neural networks produce media through a complex fabric of computation, contingent on large scraped datasets, where features and representations get encoded into the weights of unfathomably large data arrays, which in turn are enmeshed through complex chains of computation. The ease and realism with which this generated media is mass-produced, and its almost uncanny flawlessness, make it easy to forget the complex computational contingencies that produce it. This talk will show how, through the artistic practice of making targeted interventions into the inputs, weights, training, and inference of generative neural networks, artists are able to make critical works that reveal otherwise unseen aspects of these models, where the artworks themselves present new ways of understanding and making sense of these unfathomably complex computational systems.
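To give a concrete flavor of what a "targeted intervention" into a model's weights can look like, here is a minimal, purely illustrative Python sketch: it perturbs the weights of one layer of a toy generator and compares outputs before and after, so the difference hints at what that layer contributed. The tiny stand-in generator and the perturbation scale are assumptions for illustration, not a description of the artist's own tools or methods.

```python
# A minimal, hypothetical sketch of a weight intervention on a generative model:
# add noise to one layer's weights and regenerate from the same latent code,
# so differences in the output expose what that layer encodes. Illustrative only.
import torch
import torch.nn as nn

# Stand-in generator: a real experiment would load a pretrained model instead.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
)

z = torch.randn(1, 64)                     # fixed latent code for a fair comparison
with torch.no_grad():
    baseline = generator(z).clone()

    # Targeted intervention: perturb only the first layer's weights.
    layer = generator[0]
    layer.weight.add_(0.5 * torch.randn_like(layer.weight))

    intervened = generator(z)

# The per-output change gives a rough picture of that layer's contribution.
print("mean absolute change:", (intervened - baseline).abs().mean().item())
```

Keeping the latent code fixed and intervening on a single, named part of the network is what makes the comparison interpretable: whatever changes in the output can be attributed to the perturbed weights.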
Biography:
Terence Broad is an artist and researcher working in London. He is a Senior Lecturer at the UAL Creative Computing Institute and recently completed a PhD in generative AI at Goldsmiths. His art and research have been presented internationally at conferences and journals such as SIGGRAPH, Leonardo, NeurIPS, and ICCC, and at museums and galleries including The Whitney Museum of American Art, Garage Museum of Contemporary Art, Ars Electronica, The Barbican, and The Whitechapel Gallery. In 2019 he won the Grand Prize in the ICCV Computer Vision Art Gallery. His work is in the city of Geneva's contemporary art collection.
Program
TPC Members
Ajay Kapur, California Institute of the Arts, USA
Alan Chamberlain, University of Nottingham, UK
Alexander Lerch, Georgia Institute of Technology, USA
Alexander Pantelyat, Johns Hopkins University, USA
Bahareh Nakisa, Deakin University, Australia
Baoqiang Han, China Conservatory of Music, China
Baoyang Chen, Central Academy of Fine Arts, China
Bing Li, King Abdullah University of Science and Technology, Saudi Arabia
Björn W. Schuller, Imperial College London, UK
Bob Sturm, KTH Royal Institute of Technology, Sweden
Borou Yu, Harvard University, USA
Brian C. Lovell, The University of Queensland, Australia
Carlos Castellanos, Rochester Institute of Technology, USA
Changsheng Xu, Institute of Automation, Chinese Academy of Sciences, China
Chunning Guo, Renmin University, China
Cong Jin, Communication University of China, China
Dong Liu, University of Science and Technology of China, China
Dongmei Jiang, Northwestern Polytechnical University, China
Emma Young, BBC, UK
Gus Xia, New York University Shanghai, China & Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates
Haifeng Li, Harbin Institute of Technology, China
Haipeng Mi, Tsinghua University, China
Han Zhang, University of Chinese Academy of Sciences, China
Hanli Wang, Tongji University, China
Haonan Cheng, Communication University of China, China
Honghai Liu, Harbin Institute of Technology, China
Hongxun Yao, Harbin Institute of Technology, China
Jesse Engel, Google, USA
Jiafeng Liu, Central Conservatory of Music, China
Jia Jia, Tsinghua University, China
Jiajian Min, Harvard University, USA
Jian Zhang, Peking University, China
Jian Zhao, China Telecom, China
Jianyu Fan, Microsoft, Canada
Jing Huo, Nanjing University, China
Jing Wang, Beijing Institute of Technology, China
Jingjing Chen, Fudan University, China
Jingting Li, Institute of Psychology of the Chinese Academy of Sciences, China
Jingyuan Yang, Shenzhen University, China
Jinshan Pan, Nanjing University of Science and Technology, China
Joanna Zylinska, King’s College London, UK
John See, Multimedia University, Malaysia
Juan Huang, Johns Hopkins University, USA
Jufeng Yang, Nankai University, China
Junping Zhang, Fudan University, China
Kang Zhang, Hong Kong University of Science and Technology (Guangzhou), China
Kate Crawford, University of Southern California, USA
Ke Lv, University of Chinese Academy of Sciences, China
Kenneth Fields, Central Conservatory of Music, China
Lai-Kuan Wong, Multimedia University, Malaysia
Lamberto Coccioli, Royal Birmingham Conservatoire, UK
Lamtharn Hanoi Hantrakul, ByteDance, USA
Lei Xie, Northwestern Polytechnical University, China
Leida Li, Xidian University, China
Li Liu, Hong Kong University of Science and Technology (Guangzhou), China
Li Song, Shanghai Jiao Tong University, China
Li Zhou, China University of Geosciences (Wuhan), China
Lianli Gao, University of Electronic Science and Technology of China, China
Lin Gan, Tianjin University, China
Long Ye, Communication University of China, China
Maosong Sun, Tsinghua University, China
Mei Han, Ping An Technology Art Institute, USA
Mengjie Qi, China Conservatory of Music, China
Mengshi Qi, Beijing University of Posts and Telecommunications, China
Mengyao Zhu, Huawei Technologies Co., Ltd, China
Ming Zhang, Nanjing Art College, China
Mohammad Naim Rastgoo, Queensland University of Technology, Australia
Na Qi, Beijing University of Technology, China
Nancy Katherine Hayles, University of California Los Angeles, USA
Nick Bryan-Kinns, Queen Mary University of London, UK
Nina Kraus, Northwestern University, USA
Pengtao Xie, University of California, San Diego, USA
Pengyun Li, Wuhan Conservatory of Music, China
Philippe Pasquier, Simon Fraser University, Canada
Qi Mao, Communication University of China, China
Qin Jin, Renmin University, China
Qiuqiang Kong, The Chinese University of Hong Kong, China
Rebecca Fiebrink, University of the Arts London, UK
Rick Taube, University of Illinois at Urbana-Champaign, USA
Roger Dannenberg, Carnegie Mellon University, USA
Rongfeng Li, Beijing University of Posts and Telecommunications, China
Rui Wang, Institute of Information Engineering, Chinese Academy of Sciences, China
Ruihua Song, Renmin University, China
Sarah Wolozin, Massachusetts Institute of Technology, USA
Shangfei Wang, University of Science and Technology of China, China
Shasha Mao, Xidian University, China
Shen Li, Henan University, China
Shiguang Shan, Institute of Computing Technology, Chinese Academy of Sciences, China
Shiqi Wang, City University of Hong Kong, China
Shiqing Zhang, Taizhou University, China
Shuai Yang, Peking University, China
Shun Kuremoto, Uchida Yoko Co., Ltd., Japan
Si Liu, Beihang University, China
Sicheng Zhao, Tsinghua University, China
Simon Colton, Queen Mary University of London, UK
Simon Lui, Huawei Technologies Co., Ltd, China
Siwei Ma, Peking University, China
Steve DiPaola, Simon Fraser University, Canada
Tiange Zhou, NetEase Cloud Music, China
Wei Chen, Zhejiang University, China
Weibei Dou, Tsinghua University, China
Weiming Dong, Institute of Automation, Chinese Academy of Sciences, China
Wei-Ta Chu, Chung Cheng University, Taiwan, China
Wei Li, Fudan University, China
Weiwei Zhang, Dalian Maritime University, China
Wei Zhong, Communication University of China, China
Wen-Huang Cheng, Chiao Tung University, Taiwan, China
Wenli Zhang, Beijing University of Technology, China
Wenming Zheng, Southeast University, China
Xi Shao, Nanjing University of Posts and Telecommunications, China
Xi Yang, Beijing Academy of Artificial Intelligence, China
Xiaohong Liu, Shanghai Jiao Tong University, China
Xiaohua Sun, Tongji University, China
Xiaolin Hu, Tsinghua University, China
Xiaojing Liang, NetEase Cloud Music, China
Xiaopeng Hong, Harbin Institute of Technology, China
Xiaoyan Sun, University of Science and Technology of China, China
Xiaoying Zhang, China Rehabilitation Research Center, China
Xihong Wu, Peking University, China
Xin Jin, Beijing Electronic Science and Technology Institute, China
Xinfeng Zhang, University of Chinese Academy of Sciences, China
Xinyuan Cai, Huazhong University of Science and Technology, China
Xu Tan, Microsoft Research Asia, China
Ya Li, Beijing University of Posts and Telecommunications, China
Yan Yan, Xiamen University, China
Yanchao Bi, Beijing Normal University, China
Yi Jin, Beijing Jiaotong University, China
Yi Qin, Shanghai Conservatory of Music, China
Ying-Qing Xu, Tsinghua University, China
Yirui Wu, Hohai University, China
Yuan Yao, Beijing Jiaotong University, China
Yuanchun Xu, Xiaoice, China
Yuanyuan Liu, China University of Geosciences (Wuhan), China
Yuanyuan Pu, Yunnan University, China
Yun Wang, Beihang University, China
Zhaoxin Yu, Shandong University of Arts, China
Zheng Lian, Institute of Automation of the Chinese Academy of Sciences, China
Zhi Jin, Sun Yat-Sen University, China
Zhiyao Duan, University of Rochester, USA
Zichun Guo, Beijing University of Chemical Technology, China
Zijin Li, Central Conservatory of Music, China
Organizers
Luntian Mou
Beijing University of Technology, China
ltmou@bjut.edu.cn
Dr. Luntian Mou is an Associate Professor with the School of Information Science and Technology, Beijing University of Technology, and also with the Beijing Institute of Artificial Intelligence (BIAI). He received his Ph.D. in computer science from the University of Chinese Academy of Sciences, China, in 2012. He was a Postdoctoral Fellow at Peking University from 2012 to 2014 and a Visiting Scholar at the University of California, Irvine, from 2019 to 2020. He initiated the IEEE Workshop on Artificial Intelligence for Art Creation (AIART) in 2019 and published a book titled Artificial Intelligence for Art Creation and Understanding in 2024. His current research interests include artificial intelligence, machine learning, multimedia computing, affective computing, and brain-like computing. He is a recipient of the Beijing Municipal Science and Technology Advancement Award, the China Highway Society Technology Invention Award, the IEEE Outstanding Contribution to Standardization Award, and the AVS Outstanding Contribution on 15th Anniversary Award. He serves as a Guest Editor for Machine Intelligence Research and a reviewer for many important international journals and conferences, such as TIP, TAFFC, TMM, TCSVT, TITS, CVPR, and AAAI. He also serves as Co-Chair of the System Subgroup of the AVS Workgroup. He is a Senior Member of IEEE, CCF, and CSIG, a Member of ACM and CAAI, and an expert of MPEG China.
Feng Gao
Peking University, China
gaof@pku.edu.cn
Dr. Feng Gao is an Assistant Professor with the School of Arts, Peking University. He has long conducted research at the intersection of AI and art, especially AI painting, and he co-initiated the international AIART workshop. Currently, he is also enthusiastic about virtual humans. He has demonstrated his AI painting system, called Daozi, in several workshops, where it has drawn much attention.
Kejun Zhang
Zhejiang University, China
zhangkejun@zju.edu.cn
Dr. Kejun Zhang is a Professor at Zhejiang University, a joint Ph.D. supervisor in Design and Computer Science, and Dean of the Department of Industrial Design at the College of Computer Science of Zhejiang University. He received his Ph.D. from the College of Computer Science and Technology, Zhejiang University, in 2010. From 2008 to 2009, he was a visiting research scholar at the University of Illinois at Urbana-Champaign, USA. In June 2013, he joined the faculty of the College of Computer Science and Technology at Zhejiang University. His current research interests include affective computing, design science, artificial intelligence, multimedia computing, and the understanding, modeling, and innovative design of products and social management by computational means. He is the PI of a National Natural Science Foundation of China project, Co-PI of a National Key Research and Development Program of China project, and PI of more than ten other research programs. He has authored 4 books and more than 40 scientific papers.
Zeyu Wang
Hong Kong University of Science and Technology (Guangzhou), China
zeyuwang@ust.hk
Dr. Zeyu Wang is an Assistant Professor of Computational Media and Arts (CMA) in the Information Hub at the Hong Kong University of Science and Technology (Guangzhou) and an Affiliate Assistant Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. He received a PhD from the Department of Computer Science at Yale University and a BS from the School of Artificial Intelligence at Peking University. He leads the Creative Intelligence and Synergy (CIS) Lab at HKUST(GZ) to study the intersection of Computer Graphics, Human-Computer Interaction, and Artificial Intelligence, with a focus on algorithms and systems for digital content creation. His current research topics include sketching, VR/AR/XR, and generative techniques, with applications in art, design, perception, and cultural heritage. His work has been recognized by an Adobe Research Fellowship, a Franke Interdisciplinary Research Fellowship, a Best Paper Award, and a Best Demo Honorable Mention Award.
Gerui Wang
Stanford University, USA
grwang@stanford.edu
Dr. Gerui Wang is a Lecturer at the Stanford University Center for East Asian Studies, where she teaches classes on contemporary art, AI, and posthumanism. Her research interests span the arts, public policy, the environment, and emerging technologies. She is a member of the Alan Turing Institute AI & Arts Research Group. With her background in art history, she has published in the Journal of Chinese History and the Newsletter for International China Studies. Gerui's book Sustaining Landscapes: Governance and Ecology in Chinese Visual Culture is forthcoming in 2025. Her research briefs on AI, robotics, media, and society are frequently featured in public venues including Forbes, the Alan Turing Institute's AI and Art Forum, Asia Times, and the South China Morning Post. Gerui holds a doctorate in art history from the University of Michigan.
Ling Fan
Tezign.com
Tongji University Design Artificial Intelligence Lab, China
lfan@tongji.edu.cn
Dr. Ling Fan is a scholar and entrepreneur bridging machine intelligence with creativity. He is the founding chair and a professor of the Tongji University Design Artificial Intelligence Lab. Previously, he held teaching positions at the University of California, Berkeley, and the China Central Academy of Fine Arts. Dr. Fan co-founded Tezign.com, a leading technology start-up with the mission of building digital infrastructure for creative content. Tezign is backed by top VCs such as Sequoia Capital and Hearst Ventures. Dr. Fan is a World Economic Forum Young Global Leader, an Aspen Institute China Fellow, and a Youth Committee member of the Future Forum. He is also a member of the IEEE Global Council for Extended Intelligence. Dr. Fan received his doctoral degree from Harvard University and his master's degree from Princeton University. He recently published From Universality of Computation to the Universality of Imagination, a book on how machine intelligence influences human creativity.
Nick Bryan-Kinns
University of the Arts London, UK
n.bryankinns@arts.ac.uk
Dr. Nick Bryan-Kinns is a Professor of Creative Computing at the Creative Computing Institute, University of the Arts London. His research explores new approaches to interactive technologies for the arts and the creative industries through creative computing. His current focus is on Human-Centered AI and eXplainable AI for the Arts. His research has made audio engineering more accessible and inclusive, championed the design of sustainable and ethical IoT and wearables, and engaged rural and urban communities with physical computing through craft and cultural heritage. Products of his research have been exhibited internationally, including at Ars Electronica (Austria), the V&A, and the Science Museum (UK); made available online and as smartphone apps; used by artists and musicians in performances and art installations; and reported in public media outlets including the BBC and New Scientist. He is a Fellow of the Royal Society of Arts, a Fellow of the British Computer Society (BCS), and a Senior Member of the Association for Computing Machinery (ACM). He is a recipient of the ACM and BCS Recognition of Service Awards, and he chaired the ACM Creativity and Cognition conference in 2009 and the BCS international HCI conference in 2006.
Ambarish Natu
Australian Government, Australia
ambarish.natu@gmail.com
Dr. Ambarish Natu is with the Australian Government. After graduating from the University of New South Wales, Sydney, he held positions as a visiting researcher in Italy and China, worked in industry in the United Kingdom and the United States, and has worked in the Australian Government for the past ten years. For the past 17 years, he has led the development of five international standards under the auspices of the International Organization for Standardization (ISO) within the group popularly known as JPEG (Joint Photographic Experts Group). He is the recipient of the ISO/IEC certificate for contributions to technology standards. He is highly active in international standardization and in voicing Australian concerns in the areas of JPEG and MPEG (Moving Picture Experts Group) standardization. He previously initiated a standardization effort relating to privacy and security in the multimedia context within both the JPEG and MPEG standards bodies. In 2015, he received the prestigious Neville Thiele Award and was named Canberra Professional Engineer of the Year by Engineers Australia. He currently works as an ICT specialist for the Australian Government. He is a Fellow of the Australian Computer Society and of Engineers Australia, and he serves on the IVMSP TC and the Autonomous Systems Initiative of the IEEE Signal Processing Society. He has previously been General Chair of DICTA 2018, ICME 2023, and TENSYMP 2023. He has a keen interest in next-generation data and analytics technologies that will change the way we interact with the world.
Website
For more information, please visit
About Machine Intelligence Research
Machine Intelligence Research (MIR, formerly the International Journal of Automation and Computing) is sponsored by the Institute of Automation, Chinese Academy of Sciences, and was formally launched in 2022. Rooted in China and facing the world, and oriented toward national strategic needs, MIR publishes the latest original research papers, surveys, and commentaries in machine intelligence, comprehensively reporting fundamental theory and frontier research results in the field internationally, promoting international academic exchange and disciplinary development, and serving national progress in artificial intelligence science and technology. The journal has been selected for the China Science and Technology Journal Excellence Action Plan and is indexed by more than 20 international databases, including ESCI, EI, Scopus, the China Science and Technology Core Journals list, and CSCD, and it is rated a T2 (well-known) journal in the graded journal catalogue for the image and graphics field. Its first CiteScore, in 2022, placed it in the Q1 zone across eight sub-areas of computer science, engineering, and mathematics, with its best ranking in the top 4%, and its 2023 CiteScore remained in Q1. In 2024 it received its first impact factor (IF) of 6.4, placing it in the JCR Q1 zone in both Artificial Intelligence and Automation & Control Systems; the latest impact factor released in 2025 is 8.7, again in JCR Q1, with its best ranking reaching 6th worldwide. In 2025 the journal also entered Zone 2 in computer science of the Chinese Academy of Sciences journal ranking.