
Abstract: Human creators have provided the works (whether prose writing, source code, or visual works) that serve as the basis for training huge DNNs; this arguably creates an obligation regarding the AI outputs. The obligation feels most evident when prompts ask AIs to create something "in the style of such-and-such human." While such outputs are often flawed in interesting ways, they are also usually recognizable in their connection to the prompted human creator. What rights should those source humans have to control such uses, including the moral right simply to be formally recognized as the source? Few laws currently govern attribution and moral rights in generative AI, but many will come to exist soon. Those laws and technical standards may follow good or bad principles, both ethical and technical.
Bio: David is the founder of KDM Training, a partnership dedicated to educating developers and data scientists in machine learning and scientific computing. He created the data science training program for Anaconda Inc. and was a senior trainer there. With the advent of deep neural networks, he has turned to training our robot overlords as well.
He was honored to work for eight years with D. E. Shaw Research, which built the world's fastest supercomputer for performing molecular dynamics, highly specialized down to the ASICs and network layer.
David was a Director of the PSF for six years, and remains co-chair of its Trademarks Committee and of its Scientific Python Working Group. His columns, Charming Python and XML Matters, written in the 2000s, were the most widely read articles in the Python world. He has written books for Manning, Packt, O'Reilly, and Addison-Wesley, and has given keynote addresses at numerous international programming conferences.

David Mertz, Ph.D.
Director of Epistemology | KDM Training
