AI, Liability, and Hallucinations in a Changing Tech and Law Environment
Since ChatGPT came on the scene, numerous incidents have surfaced involving attorneys submitting court filings riddled with AI-generated hallucinations—plausible-sounding case citations that purport to support key legal propositions but are, in fact, entirely fictitious. As sanctions against attorneys mount, it seems clear there are a few kinks in the tech. Even AI tools designed specifically for lawyers can be prone to hallucinations.
In this episode, we look at the potential and risks of AI-assisted tech in law and policy with two Stanford Law researchers at the forefront of this issue: RegLab Director Professor Daniel Ho and JD/PhD student and computer science researcher Mirac Suzgun. Together with several co-authors, they examine the emerging risks in two recent papers, “Profiling Legal Hallucinations in Large Language Models” (Oxford Journal of Legal Analysis, 2024) and the forthcoming “Hallucination-Free?” in the Journal of Empirical Legal Studies. Ho and Suzgun offer new insights into how legal AI is working, where it’s failing, and what’s at stake.
Links:
- Daniel Ho >>> Stanford Law page
- Stanford Institute for Human-Centered Artificial Intelligence (HAI) >>> Stanford University page
- Regulation, Evaluation, and Governance Lab (RegLab) >>> Stanford University page
Connect:
- Episode Transcripts >>> Stanford Legal Podcast Website
- Stanford Legal Podcast >>> LinkedIn Page
- Rich Ford >>> Twitter/X
- Pam Karlan >>> Stanford Law School Page
- Stanford Law School >>> Twitter/X
- Stanford Lawyer Magazine >>> Twitter/X
(00:00:00) Introduction to AI in Legal Education
(00:05:01) AI Tools in Legal Research and Writing
(00:12:01) Challenges of AI-Generated Content
(00:20:01) Reinforcement Learning with Human Feedback
(00:30:01) Audience Q&A