
The human authorship requirement still stands:

> If a work's traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. [0]

Even with that, applicants must now disclose the inclusion of AI-generated content and identify which parts are human-authored versus AI-generated:

> Consistent with the Office's policies described above, applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author's contributions to the work. [1]

[0] https://www.federalregister.gov/d/2023-05321/p-44

[1] https://www.federalregister.gov/d/2023-05321/p-59



If an author chooses not to credit an AI, how are they going to know?

It's already not completely obvious with the current state of the art in at least some domains.

What happens when the tech moves from "Not completely obvious" to "Impossible to tell?"


I've seen Grammarly commercials.

What if I write some paragraphs, and then drag a big Grammarly slider across it, and it's no longer my words, but my ideas are still there, just buffed and touched up professionally?

What's "AI generation" anyway?

I wrote a limerick for a friend last week. Well, I had the idea for it and it was jangling inside my head, but I didn't feel like fleshing it out, so I had the AI write it. I was accused of "cheating". But I'm capable of writing this limerick; I just wanted to see if a computer could put a ribbon on it. And it worked fine. I claim authorship (and copyright) nonetheless.


Watermarking seems like a possible solution: https://www.nytimes.com/interactive/2023/02/17/business/ai-t...

I think that OpenAI et al are incentivized to pursue watermarking. If someone uses GPT to write a best-selling novel or a blockbuster movie script, OpenAI would want a piece of the action.

Similarly, publishers/distributors of creative works are incentivized to use any available detection tools because they don't want to be surprised when someone comes along and says, "Actually, you owe us a boatload of cash for that work."
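For anyone curious how text watermarking can even work, here's a minimal toy sketch of one published idea (a hash-seeded "green list" bias over the vocabulary, in the spirit of the "A Watermark for Large Language Models" paper). The vocabulary, function names, and bias value are made up for illustration; this is not how OpenAI or the scheme in the linked article necessarily does it:

    import hashlib
    import random

    # Toy "green list" watermark: the previous token seeds a PRNG that splits
    # the vocabulary into a green half and a red half; generation favors green
    # tokens. A detector that knows the seeding scheme counts green tokens:
    # ordinary text lands near 50% green, watermarked text much higher.

    VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary

    def green_list(prev_token: str, vocab=VOCAB) -> set[str]:
        """Deterministically pick half the vocabulary as 'green' from the previous token."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
        rng = random.Random(seed)
        return set(rng.sample(vocab, k=len(vocab) // 2))

    def generate_watermarked(length: int, bias: float = 0.9) -> list[str]:
        """Toy generator: with probability `bias`, sample the next token from the green list."""
        rng = random.Random()
        tokens = ["<start>"]
        for _ in range(length):
            greens = green_list(tokens[-1])
            pool = list(greens) if rng.random() < bias else VOCAB
            tokens.append(rng.choice(pool))
        return tokens[1:]

    def green_fraction(tokens: list[str]) -> float:
        """Detector: fraction of tokens that fall in their predecessor's green list."""
        hits = sum(1 for prev, cur in zip(["<start>"] + tokens, tokens)
                   if cur in green_list(prev))
        return hits / len(tokens)

    if __name__ == "__main__":
        marked = generate_watermarked(200)
        unmarked = [random.choice(VOCAB) for _ in range(200)]
        print(f"watermarked green fraction: {green_fraction(marked):.2f}")   # ~0.95
        print(f"unmarked green fraction:    {green_fraction(unmarked):.2f}") # ~0.50

The obvious weakness is that paraphrasing or light editing of the output dilutes the statistical signal, which is why watermarking helps with attribution but doesn't make detection airtight.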


Right… I hope this isn’t a situation where everyone has to be dishonest in order to compete at the highest level. Major League Baseball and cycling from the early 00s come to mind. I wonder if correct but unenforceable rules do more harm than good.


The correct response would be the end of digital intellectual property. We would have a creative explosion akin to the age of free sampling in music, the early internet or the modern Chinese digital landscape. The actual response will be more layers of bureaucracy, which will have the chilling effect of demoralizing small creators and further empowering IP trolls and large corporations with lots of money for this sort of thing.


> If an author chooses not to credit an AI, how are they going to know?

IF a dispute arises, it will be settled in a court of law, with the trier of fact (jury or judge, as the case may be) applying the civil preponderance-of-the-evidence standard. (Tools for detecting the use of generative AI models are being developed, as are watermarking schemes that embed marks in model output that casual human inspection won't notice.)


Fraud has existed for a long time.

For the most part society is designed with the assumption that most people will tell the truth.

You could create a different kind of society, where the default assumption is everyone lies, but I suspect no one would be able to live under those conditions.

Note this is not the same as taking steps to check for lying, it's just a question of what's the default assumption.


> If an author chooses not to credit an AI, how are they going to know?

The USCO doesn't generally investigate proactively (it would take too long), but a registration could be invalidated if it's proven that the applicant didn't disclose required information. As of today, the USCO has explicitly said that AI assistance needs to be disclosed.


It's stupid to create a rule that's both unenforceable and limits protection for unique creations generated by AI but initiated by humans. Results are all that matter, not process.


“Not going to know” can apply to most laws.

Most people, and especially enterprises, abide by them anyway.



