GreenJelly Jan 25 @ 2:32am
Stop the AI Reporting Requirements
As a software developer with 38 years of experience, I can attest that EVERY game engine on the planet is NOW being maintained and extended with the help of generative AI. AI is integrated into our IDEs, and we are using it EVERYWHERE. In addition, almost every game engine has AI built into it and has had fuzzy logic for MANY years. AI is also being used in administration, data analysis, and many other areas. None of these uses requires disclosure.

In fact, I challenge you to find a single company not using generative AI. Yet these reporting requirements only target indie developers. This is leading to INDIE developers getting review bombed, and we are tired of it. STOP THIS. Remove these anti-AI review bombs and remove these questions. Or instead, please mark EVERY game on Steam as being built with AI.

Case in point: Microsoft is a key player in AI development, yet none of their games are marked as using AI. Go look at any Microsoft game: is it marked?

Second case in point: The Lord of the Rings films used AI in their production. The tie-in games, which also use scenes from the films, do not report generative AI usage either. This is wrong: the studio openly admits to the usage, yet it is never disclosed.

We're tired of the review bombs, the death threats, and the angry competitors attacking us because we're honest and report our usage while they don't.
Last edited by GreenJelly; Jan 25 @ 2:41am
Originally posted by McFlurry Butts:
I mean, with DeepSeek being readily available and open source, AI should be expected to be in literally everything at this point. It's kind of absurd people are so against it; it's a tool anyone can use with their phone now.
My code is also readily available and open source, and yet there's not a thousand instances of Ben Lubar's Dating Site running on the internet. How come?
Originally posted by McFlurry Butts:
I mean, with DeepSeek being readily available and open source, AI should be expected to be in literally everything at this point. It's kind of absurd people are so against it; it's a tool anyone can use with their phone now.
A "tool" doesn't completely do the job for you.
I heard someone say that even your spell checker is considered artificial intelligence. It wouldn't surprise me if it's just a buzzword.

And your opinion is probably right, but there is nothing I can do to help you in any way.
Last edited by AustrAlien2010; Jan 27 @ 10:16pm
Originally posted by Boblin the Goblin:
Originally posted by McFlurry Butts:
I mean, with DeepSeek being readily available and open source, AI should be expected to be in literally everything at this point. It's kind of absurd people are so against it; it's a tool anyone can use with their phone now.
A "tool" doesn't completely do the job for you.
Then don't buy a game you don't like.
Originally posted by crunchyfrog:
Originally posted by Doctor Zalgo:

No, I'm replying to Boblin the Goblin. In fact, I actually go out of my way to draw a clear line between genAI and other forms of AI (making the distinction that the Lord of the Rings AI wasn't genAI).

Ed: for clarity, I'm not saying your interpretation is unreasonable, just that it's not what I was talking about.
The thing is, AI is useful, but only for tiny niche jobs, exactly like managing hair tresses in Lord of the Rings.

I use AI occasionally when there's some audio mix I need to demix so I can strip a certain track out of it.

It's great for things like this - simple tricks that would either be impossible for a human to do, or would take too long.

But the kicker is, these tasks are really simplistic and repetitive in nature. That's why AI excels at them: because they're simple.
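
If anyone wants to see just how small that kind of job is, here's a rough sketch of the sort of thing I mean, using the open-source Demucs separator. The tool choice and the file name are just examples, and you'd need Demucs installed first.

```python
# Rough sketch only: one way to script a stem demix with the open-source Demucs tool.
# Assumes `pip install demucs` has been run; "song.mp3" is a placeholder file name.
import subprocess
import sys


def split_vocals(path: str) -> None:
    # --two-stems=vocals asks Demucs for a vocals track plus an "everything else" track.
    subprocess.run(
        [sys.executable, "-m", "demucs", "--two-stems=vocals", path],
        check=True,  # raise if the separation fails
    )


if __name__ == "__main__":
    split_vocals("song.mp3")  # stems end up under ./separated/<model name>/song/
```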

But when it comes to LLMs and more, shall we say, encompassing things, AI is ♥♥♥♥ and will likely never work, simply because there's a whole host of problems, some of which can't be solved.

There are issues like giraffing, where AI has no ability to use context and thinks things like giraffes are more prevalent than they are in reality, because there are more photos of them online.

But worst of all are things like AI giving wrong results, which it does constantly, because it cannot judge between satire, humour, and truth. It will NEVER get round this, simply because humans can't do it infallibly either.

In these regards, AI is a dead duck.

There are issues like giraffing, where AI has no ability to use context and thinks things like giraffes are more prevalent than they are in reality, because there are more photos of them online:

This is wrong, as reasoning AIs are capable of using tools to fact-check their understanding via internet searches (and are even capable of prioritising and deconflicting information).


But worst of all are things like AI giving wrong results, which it does constantly, because it cannot judge between satire, humour, and truth. It will NEVER get round this, simply because humans can't do it infallibly either:

I've dumped excerpts from scripts for various YouTube videos into reasoning AIs, and they've been able to discern humour and satire from serious content like documentaries.


If you haven't used LLMs in the last year or so, I encourage you to give something like DeepSeek a try, because it is orders of magnitude more capable than what you might have seen in GPT-3.
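
If you want to poke at one locally before judging, a few lines are enough. This is only a rough sketch: the model ID is just an example of a small open-weight chat model, and it assumes the `transformers` and `torch` packages are installed.

```python
# Rough local-LLM sketch using the Hugging Face transformers text-generation pipeline.
# The model ID below is only an example; swap in any small open-weight chat model.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # example model, runs on modest hardware
)

messages = [
    {"role": "user",
     "content": "Is the headline 'Local man declares war on Monday mornings' satire or news?"},
]

result = chat(messages, max_new_tokens=200)
# The pipeline returns the full chat with the model's reply appended at the end.
print(result[0]["generated_text"][-1]["content"])
```

Swap in whatever model you like from Hugging Face; the point is just how little plumbing it takes.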
Originally posted by Boblin the Goblin:
Originally posted by McFlurry Butts:
I mean, with DeepSeek being readily available and open source, AI should be expected to be in literally everything at this point. It's kind of absurd people are so against it; it's a tool anyone can use with their phone now.
A "tool" doesn't completely do the job for you.

Having something do the job for you doesn't make the result not art. Just look at a director or a dance choreographer.
Last edited by Doctor Zalgo; Jan 27 @ 10:28pm
Originally posted by Doctor Zalgo:
Originally posted by Boblin the Goblin:
A "tool" doesn't completely do the job for you.

Having something do the job for you doesn't make the result not art. Just look at a director or a dance choreographer.
Both of those are still actually doing work and aren't putting in a few words to have a machine do everything for them.

Try again.
Originally posted by McFlurry Butts:
I mean, with DeepSeek being readily available and open source...

A wide range of offline generative AI models and LLMs have been around for several years already, but that doesn't mean people have to like them or want them in everything.
Originally posted by McFlurry Butts:
I mean, with DeepSeek being readily available and open source, AI should be expected to be in literally everything at this point. It's kind of absurd people are so against it; it's a tool anyone can use with their phone now.
I wouldn't be against it if it wasn't bad for the environment and if AI wasn't being trained illegally on copyrighted material without permission.
Originally posted by GreenJelly:
As a software developer with 38 years of experience, I can attest EVERY game engine on the planet is NOW being maintained, and created with the help of Generative AI.
The problem with AI isn't that it's immoral or anything. The problem is that it sucks.

"Why would I bother read it if you couldn't be bothered to write it" is the common question. And the answer is usually that you can't tell because AI is really good at looking passable. The thought of being jumped a few hours into a game with the realization that zero effort was put in is horrifying.

That's why AI needs to be labelled. Because AI output is low quality but really good at seeming higher quality. If indie devs have a problem with AI being labelled, then they can hire a human artist or just cobble together some JPEGs and release "An Airport for Aliens Currently Run by Dogs".
Originally posted by Boblin the Goblin:
Originally posted by McFlurry Butts:
I mean, with DeepSeek being readily available and open source, AI should be expected to be in literally everything at this point. It's kind of absurd people are so against it; it's a tool anyone can use with their phone now.
A "tool" doesn't completely do the job for you.

Not only that, but DeepSeek is like so many other Chinese things: an utter con.

The reason this is causing a stir is because of its claims. It does the job, and it's claimed to have been done at a fraction of the cost.

The problem is that China is a country of facades and shortcuts. So them saying they did it at a fraction of the cost is likely completely untrue.

It's either done somewhat cheaper because it's stolen (or based on stolen data, like almost every Chinese product of note), or it's just a downright lie. But investors, being dumb, always fall for this ♥♥♥♥, because greed clouds them.

Give it a short time and we'll get more info on what's ACTUALLY happening with DeepSeek.

But even then, as you rightly point out, AI is mostly a red herring, and just like so many new things, idiots who don't understand it (even at the top levels, like rich business people and investors) LOVE to profess it as a magic wand that solves bloody everything.

We've seen it many times with things over the decades.

The fact is, AI is pretty good at small, narrowly focused jobs. I always bring up one use I have in audio work: demixing. It's also good at going through large data sets and picking things out quicker than humans can.

But with things like LLMs it's an utter dead duck and will NEVER work. There are many reasons, like procedural problems such as giraffing, but the big one is that it will never be able to detect the difference between satire or jokes and fact.

Humans can't do it perfectly, so it will never do it. The thing people need to remember is that it's ARTIFICIAL intelligence, not actual intelligence. It is purely mimicking certain things, like how to construct and manipulate words to form coherent sentences and so on. But it cannot THINK, or do anything in close approximation to it.

I thoroughly enjoy watching idiots profess AI to be the next big thing without understanding a damned thing about how it works, and I greatly enjoy watching them make fools of themselves. Especially big businesses who misapply it.
Originally posted by crunchyfrog:
Originally posted by Boblin the Goblin:
A "tool" doesn't completely do the job for you.

Not only that, but DeepSeek is like so many other Chinese things: an utter con.

It doesn't matter if that specific LLM is a con or not. There have been other alternatives available for offline use for several years, including several trained on the same data as Gemini's precursors and the models used by ChatGPT. Updates to these models are released fairly frequently, too. You can find a fairly large collection of them on Hugging Face.
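
For anyone who wants to try that, here's a rough sketch of pulling one of those models down for offline use; the repo ID and target folder are just examples, and it assumes the `huggingface_hub` package is installed.

```python
# Rough sketch: caching an open-weight model locally so it can run with no network access.
# The repo ID and target folder below are examples only.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Qwen/Qwen2.5-1.5B-Instruct",  # example small open-weight model
    local_dir="./models/qwen2.5-1.5b",
)
print("Model files cached at:", local_dir)

# From here the folder can be loaded by transformers, llama.cpp, and similar runtimes.
# Setting the environment variable HF_HUB_OFFLINE=1 keeps everything fully offline afterwards.
```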
Last edited by Chika Ogiue; Jan 28 @ 6:34am
Originally posted by Chika Ogiue:
Originally posted by crunchyfrog:

Not only that, but DeepSeek is like so many other Chinese things: an utter con.

It doesn't matter if that specific LLM is a con or not. There have been other alternatives available for offline use for several years, including several trained on the same data as Gemini's precursors and the models used by ChatGPT. Updates to these models are released fairly frequently, too. You can find a fairly large collection of them on Hugging Face.

And so what?

They're all LLMs or a variation thereof. Large Language Models working on language.

They ALL suffer the same problem - they CANNOT determine satire from fact.

Demonstrate how they can, or ever will, given that humans cannot do it reliably.
Originally posted by crunchyfrog:
They're all LLMs or a variation thereof. Large Language Models working on language.

They ALL suffer the same problem - they CANNOT determine satire from fact.

Okay, so they can't determine one thing yet without a lot of coaxing (and a large enough token pool to retain what they've learned). But that doesn't mean they aren't good at other things. At the end of the day, in their current state, they are a tool. And like all tools, they need to be wielded by a competent human who understands their capabilities and can correct them when they slip.

But that by no means makes all LLMs "cons". And as I've said previously in this topic, if you've ever played a game containing text that was not written in your native language, you've played a game that leveraged an LLM during its localisation.
lx Jan 28 @ 7:59am 
Originally posted by crunchyfrog:
...They ALL suffer the same problem - they CANNOT determine satire from fact. ...
ambiguity...
it's not just that.
Now, any time you post in forums, choose your words carefully ;) as they might be your last.


-peep-peep
disclaimer: it's a quote, advice. not a threat.
-end of peep-peep
Last edited by lx; Jan 28 @ 8:03am