4 Comments
Roch Smith, Jr:

I disagree that there have been "no actual harms." Writers, for example, entered into publishing contracts that prescribed certain conditions for the payment they received. Their compensation did not include having their works used by commercial entities for the purpose of training AI models. The same goes for other creators. It's not a restriction on anybody's rights to suggest that creators be compensated for the value their works add to commercial AI endeavors. Bing Chat says as much below.

Furthermore, Bing Chat informs me that the US Government Accountability Office and the National Institute of Standards and Technology have promulgated ideas for the governance of AI. Some of these may sound a little premature and possibly unnecessary, but others sound quite reasonable, such as requirements that AI be used ethically and transparently.

I'm inspired to try a little thought experiment: On the eve of an election, a video surfaces showing a candidate barbecuing puppies. Despite the candidate's protestations that the video is fake, other videos of the barbecue's attendees surface attesting to the "facts." News networks even book remote interviews with those witnesses, who reaffirm the cruelty. After the candidate loses, we learn that the original videos and the subsequent testimonies were all AI generated. Under current law, does the responsible party get prosecuted? I'm not sure they do. Shouldn't they?

Roch Smith, Jr:

Bing has some thoughts:

While it is true that copying is an essential part of human creativity, it is important to recognize that AI is not just another tool for human creativity. It is a technology that can learn and make decisions on its own. As such, it is necessary to have rules and regulations in place to ensure that AI is used ethically and responsibly.

AI has the potential to revolutionize many aspects of our lives, from healthcare to transportation. However, it also poses significant risks if not used properly. For example, AI algorithms can perpetuate bias and discrimination if they are not designed and trained properly. They can also be used for malicious purposes, such as deepfakes or cyberattacks.

To prevent these risks, it is important to have clear rules and regulations in place for the development and use of AI. This includes guidelines for data privacy and security, as well as ethical considerations such as transparency and accountability. Governments around the world are already working on developing these regulations.

In addition to these considerations, it is also important to respect copyrighted works when developing AI. Just as with any other creative work, it is important to obtain permission from the original creator before using their work in an AI system. This helps to ensure that creators are fairly compensated for their work and that their intellectual property rights are protected.

Roch Smith, Jr:

You are a very good writer.

It doesn't take much exposure to AI to become enthusiastic about it. Truthfully, though, it also isn't long before we begin to see some -- let's call them challenges. Unpaid use of copyrighted material to train AI models comes to mind. Are you convinced a hands-off approach to AI is the way to go, or can you imagine the need for some guardrails, whether regulatory, commercial, or societal? It would be interesting to hear what constraints, if any, you think might be beneficial.

Paul Henry Smith:

All the restraints proposed so far are a priori restrictions on your fundamental right to free expression. A months-long media campaign ginning up fear of AI (and no actual harms) is making that go down easy. But I don't think we should accept it. We already have all the guardrails and rights we need in existing law.
