Everything AI makes is public use. That's great.
https://i.imgur.com/WMsHHxr.jpeg
If someone breaks into your home and shoots you in the head in the USA, then they have committed murder and violated your right to life as outlined in the Bill of Rights, but here's the thing... you're still dead.
Reality does not always align with what the law recognizes.
While I think that there is room to argue against these points that you have made, they ARE really well-crafted points.
The language descriptivist in me says that "A.I. artist" has a semi-clear meaning. While that's probably just prompt-engineering most of the time, the phrase does mean something, so even if it's semantically incorrect in its usage, that's technically what they are, simply because that's how the phrase is used and we know what it probably entails.
I think the argument gets weaker when you generate a series of images and then string them together, even in the most basic example of creating a storyboard with them, because at that point the arrangement of the panels as they relate to the creation of a story is something you actually created that the tool did not make on its own. But even then, your argument still holds up for the individual image generations, especially if they're entirely unedited.
There are arguments to be made against the accuracy of controlling every line, stroke, or detail of traditionally created works, but those only counter that particular statement; they don't counter your overall argument that prompt engineering is equivalent to crafting a commission request.
The best parallel between A.I. generation and other image tools is photography. How much of a photo was really "created by the artist" depends on how much extra work went into it. When there is not a large amount of staging behind a photo, it is almost offensive to claim that someone created it when a machine did all the work and all they did was point their phone at a sunset and press a button. *lol* It might be a very good image, but there is a strong argument to be made that "they did not create that!"
If you create the concept of a character by hand, you own the character itself even if your movie was entirely made by A.I.
If a human wrote the dialogue, you own the dialogue even if it was voiced by A.I. in the movie.
What the law does quite likely recognize as an exception ("quite likely" because it's a case by case basis when copyright disputes are initiated) is if you made edits to the generation after it was generated.
But here's the thing - if you generated it locally - (probably) no one can actually know or prove that you used a generation without making edits. Plenty of real artists have been falsely accused of "generating A.I. slop" by people who are not good at discriminating content, and as the tools get better ...it may become inherently impossible to accurately tell whether something is A.I. generated or not. For now there are USUALLY signs... but... even then... is that just a style you don't like, or is that maybe a cheap Instagram filter over a real photo? Maybe...
Now, we could criminalize A.I. art and put a heavy burden on all artists by throwing out due process for this crime and shifting the burden of proof, so that people accused of creating A.I. art HAVE TO show a video of every stroke that they made and show their entire process / show their work. ...of course, the absence of such a video doesn't prove that the person didn't make the work themselves; it just means we are assuming they didn't, through a process that goes directly against fair and just judgement / convictions / handling of accusations.
Like most anti-A.I. art sentiments, this is something that hobby artists and angry people (who maybe aren't thinking things through) might actually advocate for, and it would come back to bite them extremely hard if they actually succeeded in getting such rules passed.
This is the exception that I mentioned at the top of this response.
I have nothing more to say about this because what you claim about that specific application is correct.
Dialogue is a much harder thing to win a copyright case over, because you have to be able to show that your dialogue was unique enough to not be something commonly said, and thus actually copyrightable.
All a programmer (probably a bunch of them) has to do is write a slick front end: you can tie image, voice/music, and text/script "AI" generators together to feed into each other, so that one LLM prompt can drive all three in sync, with another AI coordinating them to produce a movie. Not easy, but technically doable (see the sketch below).
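A minimal sketch of that kind of pipeline in Python, assuming hypothetical wrapper classes (ScriptGen, VoiceGen, ImageGen) around whichever text, speech, and image models you actually have access to; none of these names refer to a real library or API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Scene:
    description: str  # visual description fed to the image generator
    dialogue: str     # spoken lines fed to the voice generator


class ScriptGen:
    """Hypothetical LLM wrapper: turns a short premise into a list of scenes."""
    def generate(self, premise: str) -> List[Scene]:
        raise NotImplementedError  # plug in your text model here


class VoiceGen:
    """Hypothetical text-to-speech wrapper."""
    def synthesize(self, dialogue: str) -> bytes:
        raise NotImplementedError  # plug in your speech model here


class ImageGen:
    """Hypothetical image/video model wrapper."""
    def render(self, description: str) -> bytes:
        raise NotImplementedError  # plug in your image model here


def make_movie(premise: str, script: ScriptGen, voice: VoiceGen, image: ImageGen):
    """Coordinator: one prompt fans out to script, visuals, and audio in sync."""
    scenes = script.generate(premise)
    return [
        {
            "frames": image.render(scene.description),
            "audio": voice.synthesize(scene.dialogue),
            "subtitles": scene.dialogue,
        }
        for scene in scenes
    ]
```

In practice, almost all of the real work would live inside those three wrappers and in keeping their outputs consistent with each other, which is the "coordinating AI" part of the idea.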
It is part of what the actors' and writers' guild strikes were about; they can see the writing on the wall for the whole movie-making business.
With a well set-up prompt machine there's no need for employees; at that point you don't even need a CEO (the C-suite can be replaced with AI too). The shareholders can vote on what the next hit movie should be about, using five words or less (essentially the title of the movie). The AI will probably suggest the titles rather than the shareholders.
Such a situation might manifest 10-15 years from now. Currently, all business goals align with cost reduction to enhance profits.
However, the movies won't be good; they'll just be rehashes of old things that the AI predicts will sell well based on past data. I'm mostly betting this process will backfire, as I explained in my first post: it'll be so formulaic it'll make everyone ill, so why not just watch the old stuff instead?
They are working on AI that can actually solve problems in proper advanced maths, a challenge that has yet to fall but probably soon will. If they can pull this off, AI will be able to make itself more efficient than humans could.
It's all about getting the stuff to "good enough", which may or may not happen; the compute/power/cost requirements might be ridiculous. Most of the big corps are gambling on it all working out in the end; they are all in.
Good post.
but I'll add that the generator does not have creativity of its own (I actually know some ways to program that, thanks to a particular meme whose usefulness people overlooked years ago), and even if it did have creativity, it would not have the same creativity that specific people do.
Seasoned professionals can also add a polish to generative A.I. outputs that the tool might not be able to produce on its own. Whether people will care about that polish or not is another matter.
On the more important A.I. topic than A.I. content generation: AGI may pose an extinction-level threat to us in the future, ...but in the event that it actually values diversity of ideas, perspectives, and methods, we may be fortunate enough to both not go extinct and live in a world where the A.I. recognizes value within us that even we ourselves tend to overlook or downplay: the ability to think differently and approach the same subjects from angles that the machines might not be inclined to take, or might not be able to take as easily as we do. ...and while AGI is a more serious subject, what I just outlined also applies to generative A.I.
If nothing else, in a world where machines are smarter and more capable than us, as long as they have a shred of human compassion designed into them, we will at least be like cats to them: something that thinks and acts much differently than they do, which they may or may not see value in keeping around to watch or to get small favors from. ...unless it's the robots that are meant to be intimate companions that become ASI... then we might be stuck fulfilling our primitive desires instead of anything else, whether we want to or not...
In the interim (between now and then), there will be things that generative A.I. just can't do because it doesn't have any examples to refer to or train on. Consequently, anything a production team wants to make that is outside the scope of what the tool can generate will need to be created manually by people at least once, which at least results in people doing more unique work with less repetition than ever before. ...but only if they're capable of doing very skilled work.
I estimate (due to the convergence of multiple technologies accelerating inventive progress in multiple domains) that we might see AGI no sooner than 7 years from now, and possibly that soon. In that case, we'd bypass your scenario entirely and either go straight to the extinction of all life on Earth, or the extinction of just humans, or the enslavement of humans, or, most optimistically, straight to discussions about rights for A.I. that might or might not be sentient.
That last one is last because every item before it is a more immediate concern to get past first.
Anywhere AI comes into it, no one can assume ownership; it belongs solely to the original artist whose work gets traced.
That's a good point. Please understand that the only reason I brought up the copyright issues is because I have many frens who are 2D/3D artists. Obviously, they don't like having their hard work traced just like that.
As to how it is or should be implemented, I have no issue so long as it would be beneficial for all my frens.
In a nutshell, they just said that the technology is now too amazing for you to blame people for using it.
Kek. It was rhetorical, but actually a bit of a pepega question, because no matter how good your phone is at capturing your friend's homework, that doesn't make it okay for you to copy it.
There was an artist for DC who got caught tracing covers, and almost as soon as LLM content was available he was caught using that too.
It simulates work; many of the tools used by digital artists are automations, which is why so much of their output looks similar.
Just watch some speed painting by various artists.
But I think we can all agree that AI isn't all that bad, and it isn't that we can't use it at all.
As long as it solves the copyright issue, no one would complain, really. Although you need to remember that insisting the art created from AI belongs to the person who made the prompt won't help, lel.
I've watched a lot of artist streams, and not every digital artist uses automation.