And your opinion is probably right, but there is nothing I can do to help you in any way.
There are issues like giraffing, where AI has no ability to use context and thinks things like giraffes are more prevalent than they really are, because there are more photos of them online:
This is wrong, as reasoning AIs are capable of using tools to fact-check their understanding via internet searches (and are even capable of prioritising and deconflicting information).
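For what it's worth, here's a minimal sketch of what that tool loop looks like. Nothing here is a real API; web_search and llm_step are hypothetical stand-ins stubbed with canned data, just to show where the "search first, then answer" step sits in the loop.

```python
# Minimal sketch, not any real API: the tool loop that lets a reasoning
# model fact-check a claim with a web search before answering. Both
# helpers are hypothetical stand-ins stubbed with canned data so the
# control flow runs end to end.

def web_search(query: str) -> list[str]:
    # Hypothetical search tool; a real one would call a search API.
    return [f"Canned snippet about: {query}"]

def llm_step(context: list[str]) -> dict:
    # Hypothetical model call. First pass: ask to verify; second pass:
    # answer using the evidence that was appended to the context.
    if not any(s.startswith("Canned snippet") for s in context):
        return {"type": "search", "query": context[0]}
    return {"type": "answer", "answer": f"Answer grounded in: {context[-1]}"}

def answer_with_fact_check(question: str) -> str:
    context = [question]
    while True:
        step = llm_step(context)
        if step["type"] == "search":              # model asks to verify
            context += web_search(step["query"])  # evidence goes back in
        else:                                     # confident enough to answer
            return step["answer"]

print(answer_with_fact_check("How common are giraffes, really?"))
```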
But worst of all are things like AI giving wrong results, which it does constantly because it cannot distinguish satire and humour from truth. It will NEVER get round this, simply because humans can't do it infallibly either:
I've dumped excerpts from scripts for various YouTube videos into reasoning AIs, and they've been able to discern humour and satire from serious content like documentaries.
If you haven't used LLMs in the last year or so, I encourage you to give something like DeepSeek a try, because it is orders of magnitude more capable than what you might have seen in GPT-3.
Having something do the job for you doesn't make the result not art, just as it doesn't for a director or a dance choreographer.
Try again.
A wide range of offline generative AI and LLMs have been a thing for several years already, but that doesn't mean people have to like them or want them in everything.
"Why would I bother read it if you couldn't be bothered to write it" is the common question. And the answer is usually that you can't tell because AI is really good at looking passable. The thought of being jumped a few hours into a game with the realization that zero effort was put in is horrifying.
That's why AI needs to be labelled. Because AI is low-quality but really good at seeming higher quality. If indie devs have a problem with AI being labelled then they can hire a human artist or just cobble together some jpegs and release "An Airport for Aliens Currently Run by Dogs".
Not only that, but DeepSeek is like so many other Chinese things: an utter con.
The reason this is causing a stir is its claims: it does the same job, supposedly at a fraction of the cost.
The problem is that China is a country of facades and shortcuts, so their claim of doing it at a fraction of the cost is likely completely untrue.
It's either done somewhat cheaper because it's stolen (or based on stolen data, like almost every Chinese product of note) or a downright lie. But investors, being dumb, always fall for this ♥♥♥♥, because greed clouds them.
Give it a short time and we'll get more info on what's ACTUALLY happening with DeepSeek.
But even then, as you rightly point out, AI is mostly a red herring, and just like so many new things, idiots who don't understand it (even at the top levels, like rich business people and investors) LOVE to tout it as a magic wand that solves bloody everything.
We've seen it many times with things over the decades.
The fact is AI is pretty good at small, narrowly focused jobs. I always bring up one use I have in audio work: demixing. It's also good at handling and going through large data sets and picking things out quicker than humans can.
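As a concrete example of the demixing point: the open-source Demucs separator does exactly this job. A minimal sketch, assuming `pip install demucs` and a local audio file; the flag is from its CLI:

```python
# Split a track into stems with Demucs (open-source music demixer).
# Assumes `pip install demucs`; output lands under ./separated/ by default.
import subprocess

def demix(path: str) -> None:
    # --two-stems=vocals gives just vocals + everything-else,
    # which is the common "remove the vocals" use case.
    subprocess.run(["demucs", "--two-stems=vocals", path], check=True)

if __name__ == "__main__":
    demix("song.mp3")
```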
But with things like LLMs it's an utter dead duck and will NEVER work. There are many reasons, like procedural problems such as giraffing, but the big one is that it will never be able to tell the difference between satire or jokes and fact.
Humans can't do it perfectly, so it never will either. The thing people need to remember is that it's ARTIFICIAL intelligence, not actual intelligence. It is purely mimicking certain things, like how to construct and manipulate words to form coherent sentences and so on. But it cannot THINK, or do anything even in close approximation to it.
I thoroughly enjoy watching idiots profess AI to be the next big thing without understanding a damned thing about how it works, and watching them make fools of themselves. Especially big businesses who misapply it.
It doesn't matter if that specific LLM is a con or not. There have been other alternatives available for offline use for several years, including several trained on the same data as Gemini's precursors, and those used by ChatGPT. Updates to these models are released fairly frequently, too. You can find a fairly large collection of them on Hugging Face.
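To make that concrete, here's roughly what running one of those Hugging Face models offline looks like with the transformers library. A minimal sketch; the model name is just one example of a small open-weights model, swap in whichever one you've downloaded:

```python
# Minimal sketch: run an open-weights LLM locally via Hugging Face's
# transformers library. Assumes `pip install transformers torch`.
# The model name is just an example of a small openly available model.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

result = generator(
    "Summarise what a large language model is in one sentence.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```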
And so what?
They're all LLMs or a variation thereof: Large Language Models working on language.
They ALL suffer the same problem: they CANNOT distinguish satire from fact.
Demonstrate how they can, or ever will, given that humans cannot do it reliably.
Okay, so they can't determine one thing, yet, without a lot of coaxing (and a large enough token pool to retain what they've learned). But that doesn't mean they aren't good at other things. At the end of the day, in their current state, they are a tool. And like all tools, they need to be wielded by a competent human who understands their capabilities and can correct them when they slip.
But that by no means makes all LLMs "cons". And as I've said previously in this topic, if you've ever played a game that contained text that was not written in your native language, you've played a game that leveraged an LLM during its localisation.
It's not just that.
Now, any time you post in forums, choose your words carefully ;) as they might be your last.
-peep-peep
disclaimer: it's a quote, advice, not a threat.
-end of peep-peep