Or any human authors who have read and learned from other authors!
Oops... that is pretty much all of them, ever.
Your perspective is an important one, as it highlights the natural tendency to become enamored with the potential of new technologies, sometimes beyond their current capabilities. It's true that Large Language Models (LLMs) like ChatGPT have limitations and don't match the often sensationalized expectations. They are not sentient beings and lack understanding or consciousness, which can lead to outputs that may be irrelevant, incorrect, or contextually disconnected, depending on the input and the situation.
In the context of gaming, integrating ChatGPT could offer novel experiences, such as more interactive storytelling or dynamic NPC responses. However, expecting it to fully understand human emotions, exhibit nuanced judgment, or spontaneously generate complex strategies akin to a human player would be an overestimation. It would be more accurate to say that these models can enhance certain aspects of games where text-based interactions are pivotal, rather than revolutionizing gaming as a whole.
The historical examples you mentioned, like early reactions to nuclear technology and computer networks, show that there's often a gap between initial public expectations and the practical unfolding of a technology. In many cases, the reality is shaped not just by the technology itself, but by how it's applied, how other technologies evolve alongside it, and how societal, ethical, and economic considerations are addressed.
It's also crucial to acknowledge the 'failures' or challenges in AI development, as they often pave the way for improvements, guiding researchers and developers on the limitations they need to overcome. These instances are generally more familiar to those within the industry, but sharing these setbacks and the learning involved can provide a more balanced view of the technology for the general public.
In conclusion, while it's beneficial to remain excited about the prospects of AI and its integration in various fields like gaming, it's equally important to temper expectations with a realistic understanding of its current stage of development. This balanced view ensures that we appreciate the technology for what it is now while continuing to strive for the breakthroughs that might be achievable in the future.
Yes, when they plagiarize. But all authors learned from other authors, just like ChatGPT does. ChatGPT has safeguards against plagiarizing, something human authors may lack.
Human learning does not work the same way that automated inference does. They are very different processes. Feeding proprietary IP into an algorithm and publishing the finished results is generally outside the scope of permissible use for protected IP. As is plagiarism. Neither humans nor engineers have carte blanche to steal other people's IP just because they have some cool use in mind for it.
"Different" doesn't mean it wanders into plagiarism territory any more than a human author does. Maybe less so. You are just trying to draw the line in such a way that it will include AI.
A human may learn from a couple of sources and be more heavily influenced by them, borderline plagiarizing them, for example. AI learns from pretty much everything under the sun. So I would argue it strays a bit further away from plagiarism.
Try to get ChatGPT to tell you the lyrics of a song, something easily gotten from the internet. It will go a few lines and then say, "That's all I can quote you, because of copyright issues."
A human infant learning to speak from a human family is not at all the same as a team of corporate engineers training an algorithm on "mystery-meat" datasets.
How do they learn to read and write, though?
lol. So only homeschooled peeps.
I have read tons of copyrighted books, I have them lining my shelf.
We all learn from each other. You have learned from copyrighted books and materials.
You are trying to make a distinction without a difference, especially when you say "your mother" taught you. So what?
This is a silly argument, frankly.
And yes, it's a silly argument. Like any debate point hatched in a corporate marketing department, it's a skin-deep, brain-dead assertion. LLMs do not learn like humans do. It's a specious analogy that was introduced to water down growing resentment on the part of IP owners over the way their work was misappropriated to design these models. Along the way, it actively misrepresents the learning processes of both humans and algorithms.
Good thing your mother never gave you any books. Having to pay IP royalties to teach her child would have been unfortunate.
Reading a book is not a copyright violation. Passing off regurgitated work from another author is. If you've ever read Helen Keller's autobiography, it contains a very vivid account of the time she first plagiarized another writer and the consequences she faced for it.
I have never met a human child who learned to read by reading every book ever published. That is literally the opposite of how reading works. You learn to read, and then you read books. And even as a child, I paid for my books. And I didn't sell my writing.
They are not the same thing.
Personally, I managed to find some time for the library, but I guess it takes all sorts.