A trainless megabase is perfectly valid for performance, but to really max it out you want to play in creative mode or start building it in a completely new location; otherwise you'll never finish because of the time it takes to sort out the spaghetti.
I mean, logically, if the gaps aren't changing, why waste processing power on them? All you have to do is track the first group that can move. Every group after the first will always be exactly the same distance behind the first moving group.
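A minimal C++ sketch of that idea (the struct layout and names are my own guess for illustration, not Factorio's actual code): advancing the belt only touches the gap in front of the lead group, and everything behind it rides along for free.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct ItemGroup {
    std::uint32_t count;      // items travelling together in this group
    std::uint32_t gap_ahead;  // free belt space in front of this group
};

struct BeltSegment {
    std::vector<ItemGroup> groups;  // groups.front() is the lead group

    // Advance the belt by `speed` units: only the lead group's gap shrinks.
    // Trailing groups keep their stored relative distances and implicitly
    // move along, so they cost nothing per tick until the lead group
    // stops or leaves the segment.
    void update(std::uint32_t speed) {
        if (groups.empty()) return;
        ItemGroup& lead = groups.front();
        lead.gap_ahead -= std::min(speed, lead.gap_ahead);
    }
};
```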
Logically, a string of 1s is going to be a lot easier to calculate with than a random string of 1s and 0s.
https://factorio.com/blog/post/fff-176
and the visualizations suggest they include gaps in the aggregate.
Which makes sense, since an empty spot on the belt is not really any different from a filled spot -- the difference in behavior only shows up when a sequence of moving items meets a backed-up section, or with an inserter that's willing to pick up anything.
Dosh had a lot of transport-line time usage because he had a lot of splitters etc. But on a long belt with no interactions, whether there are gaps or not should make no difference.
In the FFF quoted, they say they "store" the distance between items. If the distance is always 0, that sequence holds a lot less information than one with multiple gaps of random length. No matter what operation you are doing on it, even if it is simply "storing" it, it takes more calculations.
At the end of the day we can speculate about how they are doing it, but since neither of us is a modder who actually knows, it is kind of pointless.
The point being that inserters have to track what is nearby, so the groupings need to stay small so that the compute code related to inserters doesn't have to do much work.
So the expectation is that a 100-tile-long belt with a bunch of inserters at the beginning and end would perform a lot better than a 100-tile-long belt with evenly spaced inserters.
(this section is longer and technical, so I've highlighted where sushi belts are mentioned)
That's not true. If the operation is, for example, "adjust the last element, and maybe remove it", the number of other elements in the data structure is irrelevant, because you aren't doing any calculations on them.
That's what "O(1)" means in that article. It's a computer science term meaning there is some fixed upper bound on how much work an operation takes, no matter how big the data you're working with is.
As an example -- one probably rather pertinent to this discussion -- a "double-ended queue" (deque) is a list of items with two operations on it: you can insert a new item at the head of the list, or you can remove an item from the end of the list. And for typical ways to implement a deque, those operations take O(1) work. (depending on the implementation, that could be per operation or averaged over all operations)
(for the optimization they want to do, one would also need to be able to look at and modify the item at the end of the list)
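As a concrete illustration using std::deque from the C++ standard library (just to show the operations; this is not a claim about Factorio's internals):

```cpp
#include <deque>
#include <iostream>

int main() {
    std::deque<int> items;

    items.push_front(3);  // insert at the head: O(1)
    items.push_front(2);
    items.push_front(1);

    items.back() += 10;   // look at / modify the end, per the parenthetical
    items.pop_back();     // remove from the end: O(1)

    std::cout << items.size() << " items left\n";  // prints "2 items left"
}
```

Both end operations cost the same whether the deque holds three items or three million, which is the whole point of the O(1) claim above.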
---
I imagine what they are storing is not individual items, but instead runs of items -- the data is "type of item and how many". In that case, adjusting the number is going to be quicker than moving items between data structures.
(and empty stretches of belts could use the same structure: with item type being "empty")
So a sushi belt loses in this regard: since there aren't stretches of identical items, the game has to shuffle things around rather than just incrementing or decrementing counters.
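A minimal sketch of what such run-based storage might look like (the type and field names are hypothetical, my own invention for illustration):

```cpp
#include <cstdint>
#include <vector>

enum class ItemType : std::uint8_t { Empty, IronPlate, CopperPlate, Gear };

struct Run {
    ItemType type;        // what fills this stretch of belt ("Empty" works too)
    std::uint32_t count;  // how many consecutive belt positions it covers
};

int main() {
    // A fully compressed iron belt is a single run; most updates are just
    // a counter increment or decrement at one of the ends.
    std::vector<Run> compressed = {{ItemType::IronPlate, 100}};

    // The same stretch as a sushi belt breaks into many one-item runs, so
    // updates mean inserting/removing Run entries instead of adjusting counts.
    std::vector<Run> sushi = {
        {ItemType::IronPlate, 1}, {ItemType::CopperPlate, 1},
        {ItemType::Gear, 1},      {ItemType::Empty, 2},
        // ...and so on down the belt
    };

    (void)compressed;
    (void)sushi;
}
```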
There's also memory usage: a sushi belt would require more memory. Memory usage can affect how your computer performs in subtle ways -- more memory and frequent shuffling around of items are both things that typically make performance worse.
I actually do programming, so these exercises are still useful (especially since my interests tend to lie in high-performance stuff).
The speculation also has a point if one is interested in UPS optimization, since having ideas about how the game is implemented helps one devise theories (which one can then test) about what designs would perform better.
That's not to say there couldn't have been a difference. That post talks about "inactive" belts, which take even less computation; the two types are stopped and empty. Well, a lot of games "cheat" by not actually simulating things that are in a steady state. Empty, stopped, and full belts could all have been treated the same by the game, all of them being inactive, and the movement of pieces on a full belt could have been not actual pieces moving but just a segment replaced by a looping animation, with items entering it literally teleporting to the end.
My point is that you cannot just appeal to logic, because people made the game and have a choice in how they do things. But after reading the post you linked, I of course accept what they say: there is no difference between a belt with gaps and one without, unless the gap is 200 pieces long or more.
A sushi belt could be different, but it also might not be. If the size of the data doesn't matter (e.g. a segment that costs 2 bytes to describe vs one that costs 2 MB), then you can just as easily store different items in it as all the same item.
Okay, but this quote was about what you are doing with the items at the edges of the list.
Feel free to disregard this if you're still thinking about comparisons that do something with every item on the list rather than just what's at the ends; this optimization only becomes relevant when working with just the ends makes up a significant fraction of the work.
(unless cache effects are at play, but that's a whole 'nother can of worms)