Yu-Chi Ho's personal blog http://blog.sciencenet.cn/u/何毓琦 Harvard (1961-2001), Tsinghua (2001-present)

Blog Post

The Danger of ChatGPT and Generative AI

850 reads · 2024-3-13 22:14 | Personal category: Bits of Life | System category: Overseas Observations

In one of my earlier blogs, I wrote about my experience with ChatGPT, in which I asked GPT to produce a biography of myself. When I found many untruths in the write-up, I questioned GPT as to where it had found these facts. GPT confessed that it had made them up.

In the educational field, when students use GPT to produce essays as substitutes for homework and course requirements, professors face the problem of grading them fairly.

Today, our local newspaper, the Boston Globe, reported a more serious problem. A 3/13/2024 article reports that a lawyer used GPT to produce a written legal argument for a case he was defending in court. In the well-written legal brief, the lawyer cited three previous cases as precedents to support his reasoning (a well-known and important legal practice). What the lawyer failed to check was that these precedents were nonexistent; the generative AI simply made them up to support the write-up. Fortunately, the judge checked and discovered the deception, for which the lawyer was disciplined and fined. But do we know how many times such errors have gone undetected and resulted in unjust decisions?

We live in dangerous times! One cannot believe things that appear in print or in video unless they are double-checked and verified before acting on such information. Yet every day we are bombarded with an overload of unsolicited information, never mind the social media in which many of us willingly engage.

How does one behave safely and comfortably in such an environment?



https://wap.sciencenet.cn/blog-1565-1425216.html

