0.9.0 resource/mem leak with DDS?

bizziboi
Citizen
Posts: 57
Joined: Sat Apr 16, 2011 9:24 am

0.9.0 resource/mem leak with DDS?

Post by bizziboi »

Hi,
I have a simple program that determines (somewhat) the available video memory by repeatedly creating images and holding references to them until I run out of memory.
I run this in succession for various scenarios (I wanted to see the gain from using DDS).
The scenarios are "Load a PNG", "Create an image of the size of said PNG", and "Load the same image as DDS".
As expected I can load a lot more of the DDS than of the PNG - 623 of the PNG (it's pretty big).
When I create the ImageData and Image myself I can create 624 of them. When I do it after loading the PNGs (and force-freeing them) I can create 623. Not sure, but it seems there's a slight resource leak there.
When I load the DDS (of the same image) I can load 2430 of them.

However, when I try to load or create the PNGs after loading -and freeing- the DDS's I can only load 371 of them, so *something* is being held in memory when destroying the DDS.

Find the code below. The PNG I use is 500x583, 32-bit. The DDS of the same image is DXT5. Unfortunately I can't share the image.
It seems my free VRAM is in the 800 MB range (yikes).

If anyone has an idea, or if I'm missing a step, please let me know.

Code: Select all

	-- PNG and DDS loading share the same loop; only the filename differs.
	local LoadImages = function(filename, settings)
		local images = {}
		settings.total = 0
		settings.count = 0
		while true do
			local img = love.graphics.newImage(filename)
			images[#images+1] = img
			local size = img:getData():getSize()
			--print("size ", size)
			settings.total = settings.total + size
			settings.count = settings.count + 1
			collectgarbage()
		end
	end

	LoadImagesPNG = function(settings) return LoadImages("test.png", settings) end
	LoadImagesDDS = function(settings) return LoadImages("test.dds", settings) end


	CreateImageData = function(settings)
		local images = {}
		settings.total = 0
		settings.count = 0
		while true do
			local imgdata = love.image.newImageData(500, 583)
			local image = love.graphics.newImage(imgdata)
			imgdata = nil  -- drop our reference; the Image keeps its own

			images[#images+1] = image
			local size = image:getData():getSize()
			--print(settings.count, "size ", size)
			settings.total = settings.total + size
			settings.count = settings.count + 1
			collectgarbage()
		end
	end

	-- Keep calling a function until it errors, to catch the out-of-memory error.
	-- Just to see how much VRAM we have.
	TestGfxMemory = function(message, fn)
		print(message)
		local settings = {}
		local status, err = pcall(fn, settings)  -- avoid shadowing the built-in `error`
		if not status then
			collectgarbage()
			for i,v in pairs(settings) do
				print("",i,v)
			end
			print()
		end
	end


TestGfxMemory("loading images in dds", LoadImagesDDS)        -- 2430
TestGfxMemory("loading images in png", LoadImagesPNG)        -- 623 (but not after loading DDS)
TestGfxMemory("creating images in memory", CreateImageData)  -- 624 (but not after LoadImagesDDS)
slime
Solid Snayke
Posts: 3132
Joined: Mon Aug 23, 2010 6:45 am
Location: Nova Scotia, Canada
Contact:

Re: 0.9.0 resource/mem leak with DDS?

Post by slime »

I'm not sure about your issue, but that's not a great way to measure VRAM. GPU drivers can (and will) store the contents of the image in main RAM if it won't fit in VRAM, and/or they can defer allocating the memory in VRAM until the image is actually used when drawing, etc.

Not only that, but the actual amount of video memory taken up by a texture can be much larger than the size in bytes of the raw pixel data (or compressed DXT data) used to create it. Drivers are free to organize the layout of textures as they see fit, and sometimes this means using more space in order to achieve some other gain. You have no control over when/if this happens, and no real way to measure it.

Also, calling love.graphics.newImage("test.png") does more than just load the image into VRAM. It first loads the data from disk into a FileData, then decodes/decompresses the FileData (from PNG in this case) to raw bytes in RAM as an ImageData, then it loads the ImageData into an OpenGL texture. The ImageData is kept around by the Image, but the FileData is eventually cleaned up by Lua's garbage collector. The decoding step also probably allocates some extra temporary RAM.
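To make those steps concrete, the single call roughly expands to the following explicit calls (a sketch of the 0.9 pipeline; the actual internals may differ in detail):

```lua
-- Roughly what love.graphics.newImage("test.png") does under the hood:
local filedata  = love.filesystem.newFileData("test.png")  -- raw file bytes in RAM
local imagedata = love.image.newImageData(filedata)        -- decode PNG to raw pixels in RAM
local image     = love.graphics.newImage(imagedata)        -- upload to an OpenGL texture
-- The Image keeps the ImageData around; the FileData is later garbage-collected.
```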

Compressed DDS files are similar but there's a parsing step instead of a decoding step, so less temporary memory is allocated (and less memory in general because the texture stays compressed.)

If you load the PNG into an ImageData once (love.image.newImageData), and the DDS into a CompressedData once (love.image.newCompressedData), and re-use those to repeatedly load all the Images, then it will use much much less RAM.
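A minimal sketch of that reuse, using the same test filenames as the code above:

```lua
-- Decode/parse each file once, then create every Image from the same data:
local pngdata = love.image.newImageData("test.png")       -- PNG decoded once
local ddsdata = love.image.newCompressedData("test.dds")  -- DDS parsed once

local images = {}
while true do
	images[#images+1] = love.graphics.newImage(pngdata)   -- or ddsdata
	collectgarbage()
end
```

This removes the per-iteration disk read and decode step, so only the texture creation itself is measured.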

Re: 0.9.0 resource/mem leak with DDS?

Post by bizziboi »

Thanks for the heads-up - I am aware I could reuse the image data, but that was not what I was trying to test.

*But that's beside the point.*

Fact is, if I create PNGs until memory is exhausted, then destroy them all, and then try again, I can create the same number of PNGs.
If I create DDS's until memory is exhausted, then destroy them all, I can afterwards only create half the number of PNGs - something is not cleaned up when the DDS's are destroyed. Whether it's LÖVE, the driver, or crazy fragmentation (although *I* am not allocating anything in VRAM in the meantime), *something* is holding on to significant memory. We're talking a 400 MB loss here, which seems an awful lot for such a clean allocation/deallocation loop.

I guess I'll look at the source over the xmas break.

Re: 0.9.0 resource/mem leak with DDS?

Post by slime »

bizziboi wrote:Thanks for the heads up, but I am aware of reusing the image data, but it was not what I was trying to test.
I'm not sure I understand. Right now both methods for loading something share a large amount of memory and CPU usage that's unrelated to what you're testing. I was suggesting eliminating it to narrow down the source of your issue. I also explained that drivers are a massive black box and this is a really bad way to test VRAM (because it won't test VRAM.)
bizziboi wrote:we're talking a 400 meg loss here, that seems an awful lot for such a clean allocation/deallocation loop
It's not really clean. There are a lot of variables you don't/can't track.

Can you narrow down the issue to something that doesn't try to allocate several gigabytes? It will be easier to debug what the problem is if the test case is simpler and less resource-intensive.

Re: 0.9.0 resource/mem leak with DDS?

Post by bizziboi »

The problem is that I needed to allocate several gigabytes (actually it was 1.3 GB of unique image data - game jams tend to lead to such craziness). In a real-life scenario I can easily work around it, but it was the leak (memory or resource, indeed hard to tell) that worried me.

Like I said, I'll try to debug it myself - I have the depot locally. The problem might be related to the number of assets, but the game I was working on actually ended up not running on a test machine because VRAM was exhausted, so that's what I had to test.
While testing that I decided to try DXT5 to see what I could gain, and that led to my discovery that after creating and destroying the PNGs I was in the same memory situation as when I started, while with DDS I am not. There's no real other way to repro this, and the code I posted is actually a pretty decent test case.

I can't make the test smaller, as I have no way to query the memory status except by seeing how far I can fill it up - let alone query fragmentation. It could technically be fragmentation, but it would be pretty extreme.

Fact remains:
- If I create 600 PNGs and destroy them, then after that I can create 600 PNGs again. (I'll see what happens if I switch to a half-size PNG in the second loop. If I can create close to double the amount, it would confirm what I think; if not, I am wrong and have to investigate.)
- If I create 600 DDS's and destroy them, then after that I can only create 300 PNGs. That would be some pretty strange fragmentation.

So either DDS leaves memory behind, or - this could be it too - the handles are not freed and there's a limited number of handles. This I can verify by repeating the test with smaller images.
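That check could look something like this (a sketch; the 64x64 size is a hypothetical choice, everything else mirrors the test code above):

```lua
-- If the loop still dies near the same *count* with tiny images, that points
-- at handle exhaustion; if the count scales up with the smaller size, it
-- points at memory being leaked.
CreateSmallImages = function(settings)
	local images = {}
	settings.count = 0
	while true do
		local imgdata = love.image.newImageData(64, 64)  -- much smaller than 500x583
		images[#images+1] = love.graphics.newImage(imgdata)
		settings.count = settings.count + 1
		collectgarbage()
	end
end
```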

If you have suggestions for narrowing down the issue I am definitely interested because I don't see one.

Edited to add:
Hmmm, looking at the code, it seems all is indeed above board. I will report back on this, but I can't tell how soon I'll have time to delve deep. Then again, I am the only one who has a problem with it ;o)