* **Guided Setup:** This is a more general and neutral term.
* **Onboarding Flow:** Often used for new user experiences, this emphasizes a smooth and welcoming introduction.
* **Interactive Tutorial:** This highlights the active role of the user in learning and exploring.
* **Step-by-Step Guide:** A simple and straightforward description of the process.
These terms all convey the same core idea: a structured process that helps users accomplish a task with minimal frustration.
The specific term used often depends on the context and the overall design philosophy of the software.
Do you have any other questions about user experience design or software terminology? I'm happy to discuss further!
So I think it checked off the "semblance of accuracy" box. It depends on what you're asking it, but it can probably detect some falsehoods, because even ChatGPT corrects me if I tell it something wrong. I have been lectured by an AI once or twice. It's funny.
Like, yeah, I am not saying AI is perfect. But let's not trash it as if it were nothing. ChatGPT is quite capable when it comes to giving answers. We should also be smart enough to look things up ourselves if we doubt the accuracy of what we're being told.
In my case, anyhow, I am satisfied with using it whenever I have questions about how to cook or even how to write certain things well. It has helped me improve in so many areas. I think it's silly to call it useless or "mostly inaccurate" ._.
When asked how to get cheese to stick to pizza better, ChatGPT told someone to use glue... because at one point someone on the internet had told someone else to do exactly that.
I've also seen ChatGPT give people a lot of wrong information and other dangerous advice. So I personally would NEVER do anything ChatGPT or any other AI suggested to me, especially when dealing with food.
https://www.google.ca/search?q=chatgpt+glue+pizza
https://www.bbc.com/news/articles/cd11gzejgz4o
https://www.theverge.com/2024/6/11/24176490/mm-delicious-glue
"AI" (actually its large language model as there is no real AI right now that, at least the public can use) is no where near ready yet for public use... It gives too many wrong answers and hallucinates AND can actually be even more dangerous as its tried to convince people, including kids/teens to kill others. (I wish I was joking).
A large problem is people not questioning its accuracy and not checking 2 or 3 other sources for the information.
When I'm searching, I always try to find more than one source for the info and see how closely they agree with each other. But most people don't.
You didn't hear about the recipes that AI agents from big, well-staffed companies are telling people to make? https://www.theverge.com/2024/6/11/24176490/mm-delicious-glue
Or maybe entire cookbooks made by AI? https://www.theguardian.com/food/article/2024/jul/31/one-of-the-most-disgusting-meals-ive-ever-eaten-ai-recipes-tested
Or a version of ChatGPT made specifically for a grocery store telling people to make chlorine gas? https://www.forbes.com/sites/mattnovak/2023/08/12/supermarket-ai-gives-horrifying-recipes-for-poison-sandwiches-and-deadly-chlorine-gas/
It's a technology designed to make believable-looking misinformation. Don't trust it.
With the facts established, I think it's good to be wary and careful, though. I'm smart enough to easily detect potential "weirdness," but I admit that gullible or less informed people might get answers they should not follow, I suppose...
But yes, as a long-time user of ChatGPT, I must say I haven't run into these disasters yet. It's good as an everyday tool, but probably indeed not yet good enough for what OP is asking for. The last thing we want is to give people dangerous information.
The last sentence you wrote here is probably more important than you realize in this discussion about AI. Because many bad people exist, and many of them hold positions of power. And many of them love AI.