Just curious: Why do people worry so much about CPU and GPU load? Isn't your system supposed to be able to run both at 100% load for indefinite amounts of time? Like running Prime95 and Furmark for an hour? If that's not stable, then improve cooling or underclock the hardware I'd say. I've never had heat related crashes on my system and I've done lots of GPU or CPU based rendering that maxes them out for extended periods of time.
People saying they want e.g. 50% max load on both CPU and GPU seem to me like saying "I want the game to waste half of my hardware's computational power, instead of fully using it to make the graphics better or more fluid". I don't get it.
Normally, when your system isn't way overpowered for the computational task, you should see close to 100% load on either the CPU or the GPU, depending on which is the bottleneck. If you don't see 100% load, you have room to turn up certain graphics settings. If you see 100% load and don't want that for some reason, first make sure you have frame rate limiting enabled, then turn down graphics quality settings.
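Side note, in case it helps to see what a frame rate limiter actually does: conceptually it just sleeps away whatever is left of each frame's time budget, so the hardware idles instead of rendering frames you'll never see. A rough Python sketch of the idea (purely illustrative, not how any particular engine implements it):

```python
import time

TARGET_FPS = 60                  # the cap you'd set in the game or driver
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame at 60 fps

def run_capped(render_one_frame, num_frames=600):
    """Render frames, sleeping through any time left over in each frame's budget."""
    for _ in range(num_frames):
        start = time.perf_counter()
        render_one_frame()                      # game update + draw for one frame
        leftover = FRAME_BUDGET - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)                # idle instead of burning GPU/CPU on extra frames

def fake_frame():
    # hypothetical stand-in for real rendering work; burns roughly 4 ms of CPU
    end = time.perf_counter() + 0.004
    while time.perf_counter() < end:
        pass

if __name__ == "__main__":
    run_capped(fake_frame)
```

That's why an uncapped game can sit at 100% load even in a simple scene: without that sleep, it just renders as many frames as it can.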
Are you sure that between the last time you played and now, no graphics settings have changed and no updates to the game's art assets and post effects have been made? Can you get the numbers back in line with what you were expecting by lowering some graphics settings? Was the in-game time of day the same in both examples you're comparing?
I wish you good luck with finding a solution that works for you!
Thermals, stress, noise from fans, bottlenecking, general wear and tear on the system from heavy loads, etc.
Mine is a laptop configuration, so these things matter more in my case, especially the noise produced by the fans, and upgradability (not a word, says the dictionary? It is now) is very limited for me.
Nothing has changed configuration-wise since last playing, bar graphics card drivers. I maintain/clean/check it regularly. Lowering graphics settings does of course reduce usage, but at a great cost to the visuals and atmosphere.
If one has the system overhead and disposable income to be unconcerned about such degradation and added stress over time, can brute-force their way past it, and can replace and upgrade whenever they please, then good on them.
There are, however, others in life who are limited in resources and must do the best with what they have, especially in terms of system longevity.
The point here is that, if my judgement of the game's performance is correct, the barrier to entry into the experience, and to maintaining a stable gameplay experience, has become significantly higher than before, for exactly the same visuals and experience. And knowing that prior to now it was not nearly as costly on your performance, why should we put up with it?
It's in everyone's best interest that that barrier be kept as low as is reasonably possible.
Ah I see. For notebooks it is a different case indeed. Thanks for the explanation. I have bad memories of owning an old MacBook Pro from a series where the GPU could get so hot that it melted the solder joints...
It is possible that there are affordable external cooling solutions for notebooks, if you usually game in the same place, like on your desk for example. Cooling the case externally with e.g. passive heat spreaders and a big quiet fan may help reduce temperatures to a level where the internal fans aren't as loud. But I agree that if the game used to "waste" less computations, it should return back to that old level of performance.
Thank you for the suggestion; however, you seem to be implying that we, the end users, in the face of this issue, should work around the problem ourselves rather than report it and expect the developers to look into and potentially fix a very large performance discrepancy with their product? I sincerely hope not, as that is a slippery slope indeed.
Hmm, thank you. At the very least I hope a lurking dev takes heed and looks under the hood. The performance differential is too large for it to be random, for me at least.
The only thing I was trying to imply is that if - due to the cooling situation of your hardware - you can't run your gtx 1080 at 100% load, then for all intents and purposes you have something closer to only a 1060 maybe. If you feasibly can improve the cooling, you should. It's better for component lifespan, fan noise and ultimately also the visual quality you can squeeze out of a game.
Cheers
And I haven't noticed any sort of performance problems at all.
I have *not* been running a steady check on resource usage though. I had no reason to, since everything has acted normal. I've actually been pretty impressed with how smoothly it runs.
What are you running to monitor/record your resource usage?
And what sort of system *IS* that? You said it was a laptop didn't you? But with a gtx 1080?
Hello. I've edited my original post in case you were interested. It turns out I am at fault on this occasion, and have said so in the main post.
To reply to your above message: I see. Bar buying a laptop cooling pad (which is not feasible as I don't have an external keyboard), I've done all I can to lower heat: underclocking and undervolting my CPU, a custom power and voltage curve for my GPU, removing the rear case panel for more ventilation, regular cleaning of dust, new thermal paste, even to the extent of using paper clips to help increase contact between the die and heat sinks (worked well).
In times of lacking, we do what we can with what we have :-)
Hello. I've added an edit to my original post in case you're interested. It turns out I was at fault in this case. I use MSI Afterburner/RivaTuner for monitoring. Yes, a custom-built laptop with a 1080. Basically desktop parts within a laptop configuration, with few compromises (I think the mobile 1080 has a few select, very minor, lesser stats compared to its desktop counterpart).
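Side note: if you ever want a running log of those readings rather than just the on-screen overlay, one low-effort option on an NVIDIA card is to poll nvidia-smi and dump the values to a CSV. A minimal sketch, assuming nvidia-smi is on the PATH; the interval, sample count and filename are just placeholders:

```python
import csv
import subprocess
import time

# Fields nvidia-smi can report; `nvidia-smi --help-query-gpu` lists the rest.
QUERY = "timestamp,utilization.gpu,temperature.gpu,power.draw"

def log_gpu(outfile="gpu_log.csv", interval_s=1.0, samples=300):
    """Poll nvidia-smi every interval_s seconds and append one CSV row per sample."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(QUERY.split(","))
        for _ in range(samples):
            out = subprocess.run(
                ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            writer.writerow([field.strip() for field in out.split(",")])
            time.sleep(interval_s)

if __name__ == "__main__":
    log_gpu()
```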
Ah interesting system :) And that kind of explains some things.
Funnily enough, my system has its similarities. It's an Alienware X51 R3, which is a bit like a laptop masquerading as a little desktop pc.
Which I stuffed a 1080 into. lol
Or, which *did* have a 1080 inside of it.
I was having weird issues with conan exiles.
The system had more than enough specs to run the game with no problems, but once in a while, seemingly at random, it would just reboot in the middle of playing the game.
Big cards in these little cases are notorious for overheating issues, so I did a lot of temp testing, and messing around with extra fans attached to cardboard tunnel-ram intakes taped to the vents on the case and stuff...
But when I started using gpu-z to log the sensors, it never actually seemed to be getting into the red.
Although one thing that *did* show up was that my card was actually clocking up *substantially higher* than its rated clock speeds.
And after a bit of reading, I learned that these pascal cards aren't really hard-capped at the stated max speeds.
What happens is, if the variables are right -like it's not too hot, power supply seems adequate, no obvious warnings, etc- it would *overclock itself* if it felt the urge.
Which, well, it all worked great, *except* for the fact that whether a card overclocks itself, or you overclock it, running it faster *still draws more power*.
So, what seemed to be happening in the end, was that in various random situations, the card would suddenly decide, Hey! nows a great time to overclock and do something unnecessary, really fast! Weeeeee!
pbbt! reboot.
Because the power supply for the x51 is basically a laptop power supply, which would *theoretically* handle the *normal* load of a 1080.
But apparently not-quite-handle a conan exiles fueled overclocking frenzy.
Which, seriously, the times this would happen, it'd be like... No need whatsoever to overclock. Nothing fancy or special happening at all.
Just like, the 301st time I ran around the corner of a particular building in town while looking for named thralls, or the 202nd time I opened the chest by my front door.
Anyway, I was going to do some modification to get some more amps to the card, and do some case mods for cooling...
But then I saw that I could get an "alienware graphics amplifier" from Dell for only $149 (at the time. they've gone up a little), which is basically an *external* case for your video card.
It had its own power supply, it had its own fans built in; it even included several extra USB 3 ports.
So I ordered that puppy, and within a week or so, fedex had already delivered it to the wrong house...
....~~-_-;
But when I *did* finally get it, I just popped the card in there, plugged it in, and haven't had a single problem since.
The whole pc runs cooler as well, of course, because there's no longer a 1080 blast furnace installed inside the case.
So if there's an option of doing something similar to that with your laptop in a way that won't sacrifice interface speed, it really might be worth a consideration.
I'd even been using an overclocking program that came with my 1080 to *underclock* my card a bit to try and keep it from ever clocking itself up beyond the original specs, but once it was in the external case, I've never had to bother with that since.
Anyway, I wouldn't be surprised if you are getting hit by power delivery problems as well. It's not so much the lack of overall power supply wattage, but rather the amps that can be reliably delivered through the PCIe power cable/connector. Nvidia 1080s and AMD RX 580s were exceptionally hungry for amps through that cable, way more so than previous generations.
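To put rough numbers on that, here's the kind of back-of-the-envelope math I mean. These are reference-spec figures (PCIe slot, one 8-pin connector, reference 1080 board power), so treat them as ballpark; board-partner cards and real transient spikes will differ:

```python
# Rough power-budget arithmetic for a reference GTX 1080 (ballpark figures only).
PCIE_SLOT_W = 75    # a PCIe x16 slot is specced to supply up to ~75 W
EIGHT_PIN_W = 150   # one 8-pin PCIe power connector is rated for 150 W
CARD_TDP_W  = 180   # reference GTX 1080 board power

rated_delivery = PCIE_SLOT_W + EIGHT_PIN_W      # 225 W on paper
print(f"Rated delivery: {rated_delivery} W vs. card TDP: {CARD_TDP_W} W")

# The catch: boost behaviour can push draw above TDP for short bursts, and a
# small laptop-style supply also has to feed the CPU, drives, fans, etc.
# A supply that looks fine on paper can still fall over on those spikes.
hypothetical_spike = CARD_TDP_W * 1.2           # assumed ~20% transient, illustrative only
print(f"Hypothetical transient draw: {hypothetical_spike:.0f} W")
```

That's basically the pattern I was hitting: the averages looked fine, it was the brief spikes that tipped my little power brick over.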
Oh, and when I kept having that problem with conan? I wanted to track down the developers and kick them in the knee! lol
Cause, at the time, it was the ONLY game that was causing it. And it's like, okay, what in the HECK did you guys do to randomly require so much more power than necessary?!?! Noooobody else does that!
So I can kinda understand the feeling ;P
But hopefully you at least won't continue to have the weird extra usage on this game with the different refresh rate. Refresh rates can cause some real weirdness on systems. I just "fixed" a pc recently that was having issues, by going into the Windows display/monitor properties and changing the selection from 59Hz to 60Hz. Suddenly a whole lot of problems just "magically fixed themselves". lol
glad you found the source of at least some of the issues, and thanks for the followup <-- it helps out others who get stuck in here with more questions than answers :D
And way to go on stepping up to point out mistakes you'd found with your original appraisal of the situation.
It shows guts and strength of character to openly admit to one's own mistakes/misjudgements ;)
Anywho, take care and have a good day, and hope you have no troubles getting any other issues sorted ~~^_^
This got long and complicated, so I'm providing two versions of my reply ;P
TL;DR version:
When playing games, having "headroom" is important. So if a card is being used for gaming, it shouldn't really be hitting 100% at all.
Extended version:
When you say things like: "I've never had heat related crashes on my system and I've done lots of GPU or CPU based rendering that maxes them out for extended periods of time."
It sounds like you are talking about a very *different* sort of rendering.
For lack of a better way to describe it...
I will call the sort of graphics rendering that's done for a game like this "fps rendering", because the goal is to produce as good an image as possible *while maintaining a SEAMLESSLY SMOOTH frame rate of 60 frames per second or more (60+fps)*. <<--that's the goal; whether it gets reached is another matter.
Whereas there is also what I will call "fpm rendering", where the primary goal is NOT frames per second, but the ability to maintain a consistently high, pre-determined level of quality per frame, *at the cost of real-time speed*. Meaning, the frames MUST maintain such a high level of probably-post-processed quality that they generally cannot be *both* rendered AND viewed at "realtime speeds", with the end result being that it may take several seconds, or minutes, or even hours or days, to render EACH frame. So, I'm calling that "frames per minute rendering", as opposed to "frames per second rendering".
Those two types of rendering are *entirely different contexts* when it comes to how the hardware resources are, or can be, utilized.
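If you want the difference in plain numbers, it's all in the per-frame time budget. A tiny Python sketch just to make the arithmetic explicit (the offline figure is made up purely for contrast):

```python
# Per-frame time budget for "fps rendering" at a few common frame rate targets.
for fps in (30, 60, 144):
    budget_ms = 1000.0 / fps
    print(f"{fps:>3} fps -> {budget_ms:5.1f} ms to finish EVERYTHING for that frame")

# "fpm rendering" has no such deadline: an offline frame is allowed to take
# seconds, minutes, or hours, so the hardware can simply sit pinned at 100%.
offline_minutes_per_frame = 12   # hypothetical number, purely for illustration
print(f"offline render: {offline_minutes_per_frame} min per frame, no real-time budget at all")
```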
With fpm rendering, the hardware tends to either run as close as possible to flat-out 100% in order to try to crunch the numbers as fast as possible, *or* at some sort of pre-defined limit (in case some resources need to be left open for other operations to still function, like skype calls, or antivirus quick scans, etc.)
And something to realize about THAT sort of rendering is that when hardware is REALLY putting 100% into just one task, the system cannot ALSO still put 10% into browsing emails, or 15% into searching the external USB drives for cat pictures... if 100% is being used for one operation, the computer responds like total crud; like it might take 20 seconds or more to register a mouse movement or click.
That is the result of "running hardware with zero headroom".
For a more practical analogy...
Imagine you are going to try to *run as fast as you possibly can*, down a hallway with a ceiling that is exactly the same height off the floor as you are tall.
like, if you are 5 foot 4 inches tall...
How fast do you think you could RUN down a hallway that was only 5 foot 4 inches high?
The problem?
*People bounce when they run*.
So you CAN'T run at full speed down a hallway your height, or you'd bash the crud out of your skull, banging your head against the ceiling the whole way, and probably knock yourself out (total crash/BSOD) before making it 10 feet down the hallway.
That's the *literal* version of the problem of trying to run without adequate "head room".
And computer hardware isn't really that much different, because *when processes run on computer hardware, resource usage ALSO BOUNCES*.
So trying to run a pc with 100% resource usage (aka zero resource "headroom") does not actually work worth a darn on ANYTHING that has to be viewed, or listened to, *in realtime*.
It can manage what *appears* to be 100% usage, by essentially *cheating*.
I face that same problem right now trying to listen to my guitar through an amp sim on my computer. It can't *process* the signal from the guitar AND play the guitar *at exactly the same time* in realtime. there's a delay, and trying to devote more resources to the processing, reduces the resources available for the rest of the system to run in real time, aaaaaaand BOOM! soooo many crashes trying to get effectively zero delay realtime audio drivers to work for this.
Anyway, if you were to try and do *real time fps rendering* on hardware that had to run at 100% to produce a good image at 60fps...
you would never *really* get a good image at 60fps, cause computers have to do other things too!
So they need a pretty hefty amount of headroom to perform in a way that *seems* effectively seamless.
Resource handling on Windows works *best* (most efficiently/seamlessly) when you are using 25% or less of any relevant resources, and performs worst (least efficiently, in a less seamless manner) when you are using 75% or more of any relevant resources.
Just think about it.
You are in a game, and in the game, you are in a dark tunnel looking at a wall, and let's say rendering that image of a pretty much unmoving wall is using 25% of your gpu resources...
Now... You hear a sound to your right!
You turn to the right, whipping your gun up and around into a firing position, as you deftly thumb on the barrel-mounted flashlight...
As the beam of light swells out to illuminate a cone of visibility through the darkness, whirlwinds of dust, requiring hundreds of their own little physics calculations, twist and swirl about in front of you...
And let's say all that pretty dust-swirling-around effect requires a sudden-but-temporary need of *50% more gpu processing power*...
HARDWARE CHECK!
You were using 25% doing nothing...
suddenly you need 50% more...
Total?
75% of available resources required...
25% resource headroom remaining...
The dust begins to settle
70% resources in use... 30% free
the air begins to clear as more and more dust drifts back down to the floor
60% resources in use... 40% free
The dust settles almost completely, your light struggles to find form among the shadows of the darkness that it just cannot completely penetrate...
50% resources in use... 50% free
SomeTHING thrusts forward from the darkness, suddenly being exposed by the light!
53% extra resources required... *bump!!* -3% available, stutter, lag...
The dust is again sent whirling and twisting into the air with the sudden movement!
50% MORE resources needed! *BUMP!!* -53% available, *BUMP!!BUMP!!* stutter lag bump CRASH!!!
Different types of lighting and physics calculations and other gfx-related loads provide different amounts and combinations of resource taxing, in a constantly changing and evolving set of calculations and variables.
So there's no such thing as any linear load pre-calculation of any real value...
But suffice to say, if you are staring at a wall in the dark in a game like this, and your gpu is *already* registering a 50% load...
You are *already in trouble*, because that's half your potential headroom gone, without even actually having to do anything yet.
Just think of using 100% as bumping your head, because if you ever ARE using the full 100% while gaming, just WHAT is your card supposed to be able to provide to you, if it needs to put out *more* in order to maintain the same frame rate?
Like if you are at 100%, and another enemy comes charging into view?
What does your card do?
Well... *actually*, if it's a pascal card like a 1080, it MIGHT CHEAT lol. As I mentioned to the op earlier, these cards can (in the right circumstances) *automatically overclock themselves temporarily*. That's without any user interaction; it's built into the cards. So, basically, this kind of card *might* just haul off and give you 115% to cover you for those few seconds it takes to clear the bump.
But, in general, if you are already using 100% and need more, *the pc cannot go beyond 100%, so it has to slow down to compensate, until it can catch up (until it doesn't need more than 100% to do what is asked of it) again*.
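If it helps, here's the same idea as a toy model: treat a frame's render time as the work it needs divided by what the card can deliver inside one frame budget, and any spike that pushes the required work past 100% shows up directly as a longer frame, i.e. a stutter. The per-frame numbers are made up purely for illustration:

```python
# Toy model: once a frame needs more than 100% of the card's per-frame capacity,
# the frame simply runs long, and that's what you perceive as stutter/lag.
BUDGET_MS = 1000.0 / 60      # 60 fps target -> ~16.7 ms per frame
CAPACITY  = 100.0            # "100% load" = the work the card finishes in one budget

frame_work = [50, 55, 75, 100, 153, 60]   # made-up per-frame load, in % of capacity

for i, work in enumerate(frame_work):
    frame_ms = BUDGET_MS * (work / CAPACITY)
    late_ms = max(0.0, frame_ms - BUDGET_MS)
    verdict = "on time" if late_ms == 0 else f"LATE by {late_ms:.1f} ms (stutter)"
    print(f"frame {i}: {work:>3}% load -> {frame_ms:5.1f} ms  {verdict}")
```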
Anyway, this was long, but it's a complicated thing that a lot of people don't fully understand.
That entire concept of "hardware headroom", and the misconception that leads people to believe that if something isn't being used 100%, it's being wasted.
That's completely incorrect in real-time use scenarios, where that "unused" portion of your resources is actually more of a buffer.
The reality of computer hardware, is that it doesn't "get slower" as time goes on.
Like, your old pc doesn't magically "slow down".
Instead, resource demands gradually increase over time, with every update, with every new definition being added to anti virus programs to counter every new potential virus, with every new format that increases real-time useable resolution, with every newer, bigger advertisement, with every fancy new web animation...
It's like "technological inflation", which results in you *effectively* needing more and more resources, just to continue to do what is essentially the same tasks.
Like, a modern anti-virus, might be larger than the entire operating system on an older computer, so there's no way the old hardware can run a modern os AND a modern antivirus at the same time, without stumbling.
So to keep your hardware from "becoming obsolete" as fast, you get the hardware with the largest available *headroom* that you can reasonably afford to justify. Because you NEVER have "too many free resources".
Oh, and btw, common cooling solutions are intended to be good enough for common tasks and *bursts* of extreme resource usage.
Not constant extreme resource usage.
A card like a 1080 isn't expected to NEED to run at 100% all the time, because only a donkey or a professional fpm-renderer/bitminer would be running it in a way that has it constantly bumping its head in the first place, and a professional fpm-renderer/bitminer would be expected to take that extraordinary usage scenario into account when purchasing their card + cooling solution in the first place.
For GAMING, a card should very rarely, if ever, be hitting 100%, because hitting the ceiling causes lag/stutter/pauses/etc., which defeats the point of turning up graphics settings to the point where they'd hit 100% in the first place.
Anywho, hope that helps explain some stuffs! Take care, and have a good weekend!
~~^_^
Awesome! I'm glad you figured it out. And it sounds like you're already doing a lot to keep your system cool. Well done!
Thanks for the very detailed reply! Lots of good points. I've done a lot of what you call "FPM rendering" and sometimes with 2 GPUs at once. Never crashed my system. I know the pain with amp-sim latency, but again, that should never crash your system. It will sound ♥♥♥♥♥♥ up when you have the sample buffer size too low, but it shouldn't crash.
You have good points about the value of headroom in games, and I'll admit I may have oversimplified a lot of things to keep my post short. As long as my framerate doesn't dip too noticeably and I'm not playing a competitive FPS, I'm fine playing with varying framerates between 45 and 60 fps, so I'm keeping my GPU closer to that 100% load. If you value a stable vsynced framerate more, then that's totally valid and you'll need that headroom.
What I'm talking about are situations where e.g. a main menu of a game isn't FPS limited because the dev forgot to set a framerate target. When there's no limit, it'll just render frames as fast as it can. Since the CPU isn't doing much in the main menu, you'll be GPU limited, so the GPU is close to 100% load. When gamers see that, they'll write angry posts about "The game puts 100% load on my GPU in the MAIN MENU, what INSANITY is this??? I can already smell the components melting!!!". Whereas my reaction is that I'd neither care nor notice, since my system is built to safely handle 100% loads. I understand the concerns with laptops, but for desktops I always thought the way I built it was the norm. It's not even fancy, just some fans and good airflow.
The OP's post initially struck me as pretty odd, because it sounded as if what was disturbing them was that their system resources were under a 50% load or something, *not* whether there was any actual performance problem resulting from it. Whiiiich seemed a bit silly (or at least misdirected), as it's not the actual load that matters, it's how that load affects that particular hardware's performance. So I was all prepared to say, that seems a bit off... but then I saw your reply, and then I was like, well that seems a bit off too...
But for what it's worth, I think we're on the same page overall here; just approaching from different vectors, so to speak ;D
I will add that my experience with what I'd call high-midrange video cards, and lower, of previous generations seems to somewhat mirror what you describe as the experience you've had with your builds. In my experience, lower-end cards usually don't really produce enough heat under full load (and aren't really doing enough under full load) to require much in the way of cooling in the first place (laptops excluded, due to lack of decent ambient-temperature airflow), and decent midrange and high-midrange cards usually have enough in the way of cooling to handle a 100% extended load.
My experience with the 1080 has been a bit different, partly due to technology changes, and partly due to having to deal with a case that wasn't designed with adequate space/ventilation in the first place. <<--not the actual fault of the card. It's sort of a "hot rod" situation, where I've essentially got a big powerful engine stuffed into a tiny little car. And the op seems to have extra problems due to similar space constraints.
Anyway, I was actually pretty surprised to hear of anybody having graphical issues, because of the 5 (including me) people I've played this with in the last week, I'm pretty sure we've *all* been running the game flat out maxed, and nobody seemed to be having any troubles at all. (which is what originally drew my attention to the post)
Oh, with the amp sims and crashing...omg...
"I know the pain with amp-sim latency, but again, that should never crash your system. It will sound ♥♥♥♥ed up when you have the sample buffer size too low, but it shouldn't crash." <<-- I want to frame this, cause I soooo want it to be true! lol :D
I *can* say that when I used direct-sound, that did seem to be the case. I don't think it had any tendency to crash when using that technology.
However, it's like you said, trying to reduce the buffer to get something close enough to realtime to play along with it, basically just turned it into a mess.
I think it was really only *fully* stable with a 1024 buffer with directsound. which is like, playing a chord, then waiting till tomorrow to hear what you played. lol ;D
So it would (technically) sound good, but I couldn't play with it, so it made *me* sound worse than I actually am. :P
The *crashing* part seems to come along with using ASIO drivers to try to get the latency down.
Which kind of takes us back a bit to more like the old days, when drivers could interact directly with the hardware.
At least that's what I get, from my limited understanding of it.
But, basically, when messing with ASIO drivers, the system starts getting twitchy and crashy.
It starts wanting me to reboot after every setting change to stay stable.
Unfortunately, I've gotten kind of stuck at that point, and haven't been able to learn or progress much further.
Some subjects I can just keep learning and learning until I eventually figure out enough to make some progress. But ever since I suffered brain damage about 19 years ago, some subjects I just seem to get stuck on, and after a point, trying to learn more, or read more, it's like it all just becomes gibberish, and I start forgetting anything new I learn about it as soon as I sleep. tends to turn it into a matter of luck, as to whether I'll stumble across something that'll click and get me over to the other side of whatever is blocking my brain, rather than any sort of ability or skill.
pbbt! anyway, yeah, if you have any words of wisdom to pass on to me in that department, then hey, trust me, I'm all ears :)
Since ASIO drivers are intended for professional use, they shouldn't be so crashy in general. Are you using a generic ASIO driver like ASIO4ALL, or the one specific to your soundcard? I have a 1st gen Focusrite Scarlett Solo connected via USB 2 and use it with a 256-sample buffer size. The latency is low enough for me to not be bothered by it. But it's not 0 ms of course. Everything in the chain adds to the delay, including USB.
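For reference, the buffer size maps to latency pretty directly: samples divided by sample rate, per buffer, per direction. A quick sketch assuming a 44.1 kHz sample rate; the real round-trip figure is higher once you add the input and output buffers plus converter and USB overhead:

```python
# Buffer size -> latency for one buffer in one direction, at 44.1 kHz.
SAMPLE_RATE = 44_100

for buffer_samples in (64, 128, 256, 512, 1024):
    latency_ms = buffer_samples / SAMPLE_RATE * 1000
    print(f"{buffer_samples:>4} samples -> ~{latency_ms:4.1f} ms per buffer")

# e.g. 256 samples is roughly 5.8 ms per buffer; round-trip monitoring is at
# least double that, plus whatever the converters, drivers and USB add on top.
```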
Well, I wouldn't say that the "professional" label necessarily implies higher levels of tolerance to improper configurations ;P
But I will thoroughly admit that I firmly believe the crashing is happening due to me messing around with the configuration (in my case MISconfiguration) and ending up with combinations that are causing the hardware-level conniptions.
The software I started this project with is "AmpliTube Metal", which seems to be a sort of trimmed-down standalone version of AmpliTube 4, with fewer included amps and such.
A friend of mine bought it for me to try to help bring me into this century music-hardware-wise, because it includes a simulated version of my 5150 amp I've been using since they came out back in the 90's.
(I'm getting a bit old to be carrying stacks of REAL amps and speakers down the stairs every time I get the urge to jam in my living room lol)
And while I can't remember what all I did to get the signal from the guitar into the pc at a line level the first time... I do know that in my first testing I was using the directsound drivers, and I was outputting through my usual pc audio, which is via hdmi through my yamaha receiver.
That, technically, "worked", but with massive latency/delay.
I'm okay with *some* delay, and I usually run a bit of delay when I'm using effects, but not THAT much delay. It was up to a second or more between striking a string and hearing it, which just doesn't work.
Uhhh, looks like I have to cut myself off without much more details because the wife has appeared with a lovely roast beef dinner, and as soon as we get that down, we have a gaming session of generation zero scheduled with her, my sister, and a friend, that'll probably last till their bedtime. doh! lol
But I can quickly list what I have available and what I was trying to do.
Hardware-wise, I have a focusrite scarlett 6i6 gen 2
And I have a peavey 6505mh (mini head) that has both xlr and usb outputs.
And the two main things I'd *like* to be able to do (if possible), is play into my pc and use the pc's audio output as my monitors for normal playing.
And, when not playing that way...
To be able to play into my pc and capture the guitar via OBS studio.
Basically I usually stream the games I play, and occasionally I pick up the guitar and do some playing.
Currently the speaker from whatever amp I'm plugged into gets somewhat picked up over my headphone mic, but to do that, I have to play with one earphone half off my head so I can hear what I'm playing, and the guitar sound is totally *dry* (since my trusty old effects processor died ages ago. Got a "new" one coming on Monday though! "New", in this case, meaning one that's over 20 years old. lol <-- unfortunately, programming effects processors is another one of those things that my brain cannot seem to relearn after the damage. The knowledge just won't stick, so every newer multi-effects processor I've tried has been completely wasted on me. So I've tracked down the same kind I used to use back in the 90's, in hopes that I won't HAVE to try and *learn* how to use that one -a Korg A5-, since *I already knew* how to program that kind *before* my brain was damaged.) with no effects or anything, whiiiiich just doesn't sound as good as it does with effects.
So yeah, it would be nice to be able to play into/through AmpliTube to use some of its effects presets. Which I haven't really messed with yet, and might not be able to learn to use either... but since they are all emulated "stomp boxes", I think I can probably fiddle with them. Because my big block with effects processors is that modern ones all use arrows and submenus with abbreviated words you have to try to decipher from little tiny displays... It's not like using a good ol' stomp-box with nice *knobs* you can turn for everything, and the words written by the knobs saying what each one does, and nice dedicated sliders... the old A5 I used was more like 4 or 5 stomp boxes stuck together, so programming it is pretty much like using some old-school stomp boxes, which is good for me, because that doesn't seem to throw my brain off like trying to manage an EQ via some LCD submenu or some pbbt...
anyway, once the A5 is here, I would have some other options for adding effects, but I like the thought of using AmpliTube's effects and being able to experiment with its other amp models and such.
Plus my buddy spent $99 to buy it for me, and I feel guilty for not being able to tell him I'm using it. lol
sheesh...
But yeah, that's what I'm trying to do, but I can't seem to capture the modified audio stream in OBS once it goes to AmpliTube; it's like a musical black hole. I can't get AmpliTube to play out of the HDMI audio without using "DirectSound" drivers, which introduce too much lag/latency (theoretically, more configurable ASIO drivers could help with that).
Using the Scarlett's interface, I couldn't figure out how to hear the right things on the pc <<-- can't remember specifics atm. Maybe I couldn't get it through my USB headset, or HDMI, or something, and/or it may have been that I couldn't get the AmpliTube-modified version (wet version) of the signal... although I did hear a bit over a horribly broken old set of headphones via its hardware monitor outputs... but I don't have any *good* analog headsets, or analog monitors, and I wouldn't be able to hear the other computer stuffs while streaming if I had to take my headset off to listen to headsets/monitors attached directly to the Scarlett...
So it's been a bit of a mess lol
anyway, they are waiting for me so I have to stop typing! But if you have any suggestions/questions based on that mess I've just typed, then by all means, I'd appreciate it :D
btw, what sorts of things were you using your scarlett to do? musically curious ;)