Running vanilla Skyrim here.
1920x1080x32 @ 144 Hz.
Maxed settings, forced 4x SGSSAA, forced AO, -15 mip bias, forced AF disabled, forced trilinear filtering..., forced vsync, etc.
It's using 100% of the GPU and 100% of the VRAM...
I'll have to lower the AA to run it in 3D too.
Some games I can go 8x, some 4x, or really 2x... :\
In 3D my fps halves in a lot of games; VRAM consumption doesn't seem to go up with it, though.
Might be a speed issue with my system though; in 3D mode CoD: WaW starts slower.
In some AO modes, SR3 takes much longer to load.
Using:
306.79-quadro-tesla-win8-win7-winvista-32bit-international-whql
The later drivers are supposed to be faster, but they're unusable to me without a working LOD bias lock at -15.
Last edited by NEOAethyr; 02-01-2013 at 03:23 AM.
Surround resolutions aren't really "high resolution" in the sense that they have higher PPI or smaller pixels. The ~6000 horizontal pixels are spread across 5 feet of monitor space.
If that were crammed into a 30" or 40" display, you'd be right that higher PPI reduces the need for AA. As it stands, 27" 25x14 panels are the best we have for that:
http://www.prismo.ch/comparisons/desktop.php
Intel 990x/Corsair H80 /Asus Rampage III
Coolermaster HAF932 case
Patriot 3 X 2GB
EVGA GTX Titan SC
Dell 3008
Fail to understand? You cannot comprehend the meaning/purpose of VRAM? Very sorry man, you and some here are badly out of touch when it comes to VRAM/VRAM usage/etc.
Just because it's running doesn't mean there's enough VRAM to buffer fully.
Not just Skyrim, every game.
The min is what you want to be high; avg is useless when you drop to 15 fps. The more VRAM buffer, the better/faster the streaming/transfer from system memory/HDD = smoother gameplay.
My Doom 3 uses up all VRAM before using all of the GPU (80% core / 100% memory @ 720p). Raising AA/AF would raise GPU usage and then start swapping because it doesn't have enough memory.
An 8MB texture could kill a 1GB GPU @ 1080p.
We want the best experience possible and they just want to play, but to say what is useless for a game that can be modded doesn't make sense.
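To put rough numbers on how quickly texture data eats a framebuffer, here's a back-of-envelope sketch. The figures are my own illustrative assumptions (uncompressed RGBA8 textures, with a full mip chain adding roughly a third), not measurements from any game:

```python
# Back-of-envelope texture memory math (illustrative, uncompressed RGBA8).
def texture_bytes(width, height, bpp=4, mipmapped=True):
    """Approximate VRAM footprint of one texture; a full mip chain adds ~1/3."""
    base = width * height * bpp
    return int(base * 4 / 3) if mipmapped else base

# A single 4096x4096 texture is already ~85 MB with mips:
mb = texture_bytes(4096, 4096) / (1024 * 1024)
print(f"{mb:.1f} MB")

# A mod pack swapping in a couple dozen of these overruns a 1 GB card fast:
pack = 24 * texture_bytes(4096, 4096)
print(f"{pack / 2**30:.2f} GB")
```

Compressed DXT formats cut these numbers by 4-8x; the point is simply that a handful of large uncompressed textures can overrun a 1GB framebuffer very quickly.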
From what I've read, some games will cache as much in VRAM as there is available, but having less doesn't necessarily impact performance.
Of course, when you throw 4 and 6GB cards into the mix, more latency/framerate variation of AMD cards, variations in how well brands run individual games, the picture is a whole lot less clear than "More VRAM is always better.".
AFAIK there is no accurate way to measure how much VRAM a game "needs" to run well at this point in time.
I would like to think that more than 0.1% of people with high-end cards heavily mod games like Skyrim.
A 1.5GB GTX 480 in Skyrim with mods at 1080p = ~3 fps for me, simply due to a severe lack of VRAM.
Lighten the amount of texture mods/VRAM used and fps went back up to 60 with a few stutters.
Move to a card with 2GB of VRAM and you just get a few stutters without the severe fps drop.
I would also like to think that new games will finally start adopting 64-bit-only engines and using over 3GB of RAM and 1GB of GPU memory by default,
but if everyone just keeps running a 32-bit OS with 4GB of RAM and a 1GB GPU, it will never happen.
TJ08-EW 6700k@4.7 1.375v - Z170-GENE - 2x8g 3866 16-16-16 - 1070@ 2088\9500MHz -Samsung 830 64G, Sandisk Ultra II 960G, WD Green 3tb - Seasonic XP1050 - Dell U2713 - Pioneer Todoroki 5.1 Apogee Drive II - EK VGA-HF Supreme - Phobia 200mm Rad - Silverstone AP181 Project Darkling
3770k vs 6700k RAM Scaling, HT vs RAM, Arma III CPU vs RAM, Thief CPU vs RAM
Your sig says you have a Phenom II and a 1GB 4850; no offense intended, but it sort of contradicts your statement. Not saying my 990X and 2GB GTX 680 SLI, or my 2500K and HD 7970 (both with 25x16 panels), put me in the 0.1% either, but apparently all types of users "live here".
It's sort of a moot point to talk about VRAM limitations, because no matter how much you have, you can exceed it in a variety of ways. So if I say "Nuh uh! I can run my Skyrim at 25x16 with the texture packs because I have 3GB of VRAM on my 7970!" I'm just a guy waiting to hear "But I can run Skyrim in 3D at 57x10 with 4xAA because I have 4GB of VRAM on my 680!".
It's a pointless argument that can never be won, and as noted, we have no good tool for measuring VRAM use.
As reported on AnandTech, it's an overclocked 690 apparently: http://forums.anandtech.com/showpost...&postcount=431
Asus P8Z77-V Deluxe / i7 3770k @4.5GHz / 8GB Samsung Green Series 30nm @2133MHz / Phanteks PH-TC14PE / Asus GTX Titan @1.1GHz / Corsair AX1200 / Audio-gd 11.32 Dac-Amp & Audeze LCD-2
You have several tools. Not all of them are perfect in all situations, but the fact remains there are several which produce relatively good results as long as you have the faculties available to interpret the results.
Here are 3 off the top of my head.
Fraps
Rivatuner and derivatives (msi afterburner for example)
Asus GPU Tweak
Then some games (Far Cry, Serious Sam, etc) also have in-game methods.
As a 3rd option you can visually test and mentally comprehend your benchmarks. If you get 3fps at 1920x1080 with 4xAA but 30fps with no AA then you've probably either run out of VRAM or the game engine is poorly designed for streaming of compartmentalized data.
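For the polling approach those tools take, a minimal sketch along these lines can log VRAM use while a game runs. It assumes a reasonably modern NVIDIA driver whose `nvidia-smi` supports the `--query-gpu` flags (the 306.xx-era drivers mentioned earlier in the thread may not):

```python
# Sketch: log per-GPU memory use via the nvidia-smi CLI while a game runs.
import subprocess
import time

def parse_mib(csv_out):
    """Parse nvidia-smi CSV output (one MiB value per line) into ints."""
    return [int(line) for line in csv_out.splitlines() if line.strip()]

def vram_used_mib():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_mib(out)

def log_peak(seconds=30, interval=1.0):
    """Poll for a while and report the peak VRAM use seen on any GPU."""
    peak = 0
    end = time.time() + seconds
    while time.time() < end:
        peak = max(peak, max(vram_used_mib()))
        time.sleep(interval)
    print(f"peak VRAM used: {peak} MiB")
```

As with Afterburner and friends, this reports allocation, not need, so the caveats discussed below still apply.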
All along the watchtower the watchmen watch the eternal return.
I think it was Skymtl that wrote a good thread here or on his site as to why tools like Afterburner are not good for recording memory use in games. Not sure if it was entirely due to the "game will take as much as is available" issue only, or if there were other reasons.
Maybe he'll chime in, but that is why I said that. If there are good tools to record memory use, I'll gladly retract.
Intel 990x/Corsair H80 /Asus Rampage III
Coolermaster HAF932 case
Patriot 3 X 2GB
EVGA GTX Titan SC
Dell 3008
I could go on and on about memory use reporting in Afterburner, Precision and whatnot.
They are all accurate to a certain extent since they are actively reporting the amount of memory being allocated for usage by a given application. However, many times it is the game which is the issue.
Back then, I gave Shogun 2: Total War as an example since many were reporting that it was eating up a massive amount of memory. And, it was. However, it didn't matter at the time whether you had a 1GB card or one with 4GB; memory was always a bottleneck.
The reason for this was pretty convoluted. The game wasn't "flushing" texture information from the buffer, which resulted in a steadily compounding memory footprint as you played. For example, when you started the game, the front-screen options popped up with a rendered in-game sequence playing behind them. At high settings the memory footprint here was ~550MB, if memory serves me correctly. However, starting the actual game (campaign map) resulted in that 550MB remaining resident rather than being flushed, with the ~250MB of campaign map information layered on top of it. If you started a battle, another 750MB to 1GB of data was added as well, for a total of ~1.5GB of memory usage or more.
Using higher settings made this issue even worse.
As a result of this compounding problem, the game's forums were filled with people saying their high end graphics cards weren't enough to handle Shogun 2. The only way around this was to save the game mid-battle, restart the application and directly load your save again, skipping the campaign map.
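The accumulation described above can be modeled in a few lines. This is a toy sketch using the thread's illustrative numbers, not anything measured from Shogun 2 itself:

```python
# Toy model of a texture buffer that never flushes the previous scene
# (illustrative numbers only, not measured from Shogun 2).
class TextureBuffer:
    def __init__(self, flush_on_scene_change):
        self.flush_on_scene_change = flush_on_scene_change
        self.resident_mb = 0

    def load_scene(self, scene_mb):
        if self.flush_on_scene_change:
            self.resident_mb = 0          # evict the previous scene's textures
        self.resident_mb += scene_mb      # layer the new scene on top

buggy, fixed = TextureBuffer(False), TextureBuffer(True)
for buf in (buggy, fixed):
    for scene_mb in (550, 250, 900):      # menu -> campaign map -> battle
        buf.load_scene(scene_mb)

print(buggy.resident_mb)  # 1700 -- everything accumulates, as described
print(fixed.resident_mb)  # 900  -- only the current scene stays resident
```

On a flat footprint like the "fixed" case, even a 1GB card would cope; the "buggy" case blows through it regardless of how much VRAM you buy, which is exactly why the save-and-reload workaround helped.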
However, the real cause wasn't memory use but rather a lack of proper coding on Creative Assembly's part. I've seen the same issue present itself in the initial releases of Assassin's Creed III and COD: Black Ops. The only reason it didn't get noticed there is those games' initially small memory footprint.
Developer optimizations play a huge part as well. If a developer doesn't implement the necessary memory optimizations, even the highest-end graphics card will struggle with bandwidth constraints.
Some people have been spewing nonsense about VRAM requirements since the introduction of the GTX 680; just refrain from taking their opinions on the matter into consideration.
STEvil is right: with the available tools it's relatively trivial to discern VRAM usage. The problem (which I'm guessing SKYMTL highlighted) is that VRAM occupation alone isn't enough to conclude that the memory is insufficient, because texture buffers don't always get flushed and you can end up with a much bigger occupation than necessary. Occupation should always be cross-referenced with framerate on both a larger and a smaller framebuffer.
The one and only situation where 2GB isn't enough is Skyrim with excessive and pointless texture mods at 5760x1080+, or BF3/Crysis at silly SSAA+MSAA settings, at which point memory bandwidth and performance in general wouldn't keep up anyway.
For people who require that and the idiots who look at VRAM as a performance measure, there will always be properly overpriced cards with twice the amount of memory.
Last edited by JaD; 02-04-2013 at 06:32 AM.
Yeah, the current tools have a few flaws, but nothing insurmountable; they're "close enough" for all practical purposes. However, there are a lot of issues with just taking that number and running with it. JaD's conclusion is correct here, and SKYMTL illustrated why very well.
I heard from someone with connections that Titan will indeed be a 15 SMX part. Would be great if true.
I'm looking forward to buying one of these as a replacement for my GTX 580, if performance is really good. And I'm happy that they put so much memory on the card. Sure, right now it's not needed, but I want to keep this card a few years. 2GB would not be enough IMHO; it's one of the reasons why I did not want a 680. Look at Crysis 3: in a video, one of the developers shows a scene and says "this is 2.5GB of textures". So while 6GB is maybe a bit of overkill for now, I think the card should have at least 4GB. Nvidia mostly selling 2GB cards is only holding back progress. If all cards had 4GB, I'm sure it would be used by now.
Asus Z87 Deluxe, 4770K,Noctua NH-D14, Crucial 16 GB DDR3-1600, Geforce Titan, ASUS DRW-24B3ST, Crucial M500 960GB, Crucial M4 256GB, 3 X Seagate 4TB, Lamptron FC5 V2 Fancontroller, Noctua Casefans, Antec P183 Black, Asus Essence STX, Corsair AX860i, Corsair SP2500 speakers, Logitech Illuminated Keyboard, Win7 Home Pro 64 bit + Win 8.1 Home 64 bit Dual boot, ASUS VG278H
All 2.5GB won't be sent to the GPU at one time. The vast majority of it will be pre-loaded in cached form. With a proper texture caching hierarchy, several dozen GB of texture data can be present within a scene without bogging down a 1.5GB card. Actually, caching efficiency is one of the integral aspects of today's GPU designs.
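As a sketch of that caching idea (my own simplified model, not any engine's actual streaming code): keep the working set inside a fixed VRAM budget and evict least-recently-used textures, re-streaming them from system memory when they're touched again.

```python
from collections import OrderedDict

# Simplified LRU texture cache with a fixed VRAM budget (toy model).
class TextureCache:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()   # name -> size_mb, in LRU order

    def touch(self, name, size_mb):
        """Mark a texture as needed this frame, streaming it in if absent."""
        if name in self.resident:
            self.resident.move_to_end(name)        # recently used
            return
        # Evict least-recently-used textures until the new one fits.
        while self.resident and \
                sum(self.resident.values()) + size_mb > self.budget_mb:
            self.resident.popitem(last=False)
        self.resident[name] = size_mb

cache = TextureCache(budget_mb=1536)               # e.g. a 1.5 GB GTX 480
for name in ("rock", "grass", "armor", "sky", "rock"):
    cache.touch(name, 512)
print(list(cache.resident))  # ['armor', 'sky', 'rock']
```

With a hierarchy like this, the total texture data in a scene can far exceed the framebuffer; only a cache miss (the second "rock" touch here) costs a transfer, which is why caching efficiency matters more than raw VRAM size.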
Without wanting to go even further: an increase in memory size needs to be matched by the hardware behind it: cache size and speed, registers, etc. We have seen many GPUs shipped with 2x the amount of RAM (4GB instead of the initial 2GB) where performance does not increase, for a simple reason: the hardware behind it cannot properly use that amount of GDDR.
CPU: - I7 4930K (EK Supremacy )
GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
Motherboard: Asus x79 Deluxe
RAM: G-skill Ares C9 2133mhz 16GB
Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0