You have to double right-click on a custom node to be able to edit it.
Anyway, I just posted in another thread about this "issue", which seems to be an actual part of how the game is supposed to be played. I'm not really a fan of it though, because it really feels counter-intuitive.
This just blew my mind.
Thank you for that piece of information. I do not remember being told that in the game.
I think I saw it in the tutorial; I did not pay much attention to it, but it was there :p
What I mean is, is there some kind of performance benefit to it, rather than just saving time by reusing old code?
But to answer your question: usually, there is no performance advantage to outsourcing code into a library. In theory, avoiding code redundancy might reduce the memory footprint of the code, which gives some additional speed. But often it is the other way around: it's harder to optimize code across libraries.
When thinking of the game as an educational game, motivating the user to reuse code might not be a bad idea. But the implementation is poor. First, it is extremely hard to tell how much time every module needs. Second, many custom nodes consist of a single module but are still faster than placing that module directly.
My suggestion would be to remove the time advantage of custom nodes and DLL nodes. Instead, one could add some puzzles that have to be solved with custom nodes only.
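To illustrate roughly what I mean, here is a made-up C sketch (not from the game; the helper names are invented): a helper defined in the same translation unit can be inlined by the compiler, whereas the same helper hidden behind a DLL/.so boundary is opaque to the optimizer and costs a real call on every use.
[code]
#include <stdio.h>

/* Defined in the same file: the compiler can inline this call and
 * reduce the loop below to straight-line arithmetic. If square()
 * instead lived in an external DLL, every iteration would pay an
 * indirect call and no cross-boundary inlining would be possible. */
static inline long square(long x) { return x * x; }

long sum_of_squares(const long *v, int n) {
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += square(v[i]);   /* inlined here; opaque if in a library */
    return acc;
}

int main(void) {
    long v[] = {1, 2, 3, 4};
    printf("%ld\n", sum_of_squares(v, 4));  /* prints 30 */
    return 0;
}
[/code]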
A better explanation would be this:
It's the difference between software and hardware: custom modules are "finished" and can be put into a hardware solution, which works much faster than a software solution.
That's what CERN does, for example, to filter the data from the supercollider. They use triggers to filter out the unimportant data; that's the only way to manage the data volume during an experiment.
A key example of this would be code optimization where you get rid of an intermediate data structure, thus reducing time and space complexity. I would say custom nodes should be renamed to optimized nodes.
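A rough C sketch of what I mean by getting rid of the intermediate data structure (the function names are just made up for illustration): the naive version materializes a full array of squares before summing; the fused version computes the same result in one pass with O(1) extra space.
[code]
#include <stdio.h>
#include <stdlib.h>

long sum_squares_naive(const long *v, int n) {
    long *squares = malloc(n * sizeof *squares);  /* intermediate buffer */
    for (int i = 0; i < n; i++)
        squares[i] = v[i] * v[i];
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += squares[i];
    free(squares);
    return acc;
}

long sum_squares_fused(const long *v, int n) {
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += v[i] * v[i];   /* no intermediate array at all */
    return acc;
}

int main(void) {
    long v[] = {1, 2, 3, 4};
    /* both print 30; the fused version never allocates the temporary */
    printf("%ld %ld\n", sum_squares_naive(v, 4), sum_squares_fused(v, 4));
    return 0;
}
[/code]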
If you build in the cloud, the value of data locality is even clearer, as the providers (Amazon, Azure, Google) often charge to ferry data around in one form or another.
As Kaibioinfo mentions, though, accessing data from one library to the next is generally identical if the data is available in memory.
I would think of the data moving as completed operations. Each of the operators has a time cost associated with it, so that's a nice representation.
What's not clear, though, are the details on using custom nodes! I had no idea they offered a bonus, and I had no idea that you could edit them!
I am playing with my 6-year-old son to get him interested, and in the process of letting him engage, I guess I missed that detail.
You guys should call that out more. I was milking the basic nodes like a moron and I realize now I could've accomplished more with less.
Cheers to a great game; love that it's laid back. One thing I feel is missing is a discussion of tactics to use. That, and really using the UI to drive me to use custom nodes.
I do also think there's a bug in the DLL code, though, when running blocks of different speeds, and I don't think you can get a gold in the dictatorship level without exploiting it.