Does anyone know how much memory we have allocated to the GPU, and how to change it?

I couldn’t find a configuration file anywhere to change the amount of RAM reserved for the GPU. I don’t know if it is dynamically allocated (the GPU automatically takes more RAM when it needs it) or if we have to set it in a config.

Increasing this amount could help a game/emulator run better, so it might be an interesting thing to know.

Here they explain how to reserve memory for the sunxi kernel, but our u-boot doesn’t have a config file:
https://linux-sunxi.org/Cedrus/libvdpau-sunxi
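
One alternative that doesn’t require rebuilding the kernel, assuming our u-boot passes an editable kernel command line (bootargs in the u-boot environment, or a boot.cmd/boot.scr script depending on the image), is the cma= boot parameter, which overrides the size compiled into the kernel. A rough sketch:

# Append this to the kernel command line to reserve 256 MiB for CMA
cma=256M

# After a reboot the new reservation should show up in the boot log:
cpi@clockworkpi:~$ dmesg | grep cma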

There are mentions of reserving memory for CMA in the kernel, which seems to be our case. So in the kernel we could allocate more memory by setting:

CONFIG_CMA=y
CONFIG_CMA_SIZE_MBYTES=64
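
To check what the running kernel was actually built with (a quick sketch, assuming the kernel exposes its configuration in /proc or ships a config file in /boot):

# If CONFIG_IKCONFIG_PROC is enabled:
cpi@clockworkpi:~$ zcat /proc/config.gz | grep CMA

# Otherwise, if a config file was installed alongside the kernel:
cpi@clockworkpi:~$ grep CMA /boot/config-$(uname -r)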

64MB is what we have in our kernel. I changed it to 256MB, but I don’t actually know whether the GPU is using it or not. I still see the same memory available when I run free:

cpi@clockworkpi:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           1004          58         788           9         157         915
Swap:             0           0           0

But now I can see that the CMA reservation changed:

cpi@clockworkpi:~$ dmesg | grep cma
[    0.000000] cma: Reserved 256 MiB at 0x60000000
[    0.000000] Memory: 765360K/1048276K available (7168K kernel code, 407K rwdata, 1760K rodata, 1024K init, 255K bss, 20772K reserved, 262144K cma-reserved, 261832K highmem)
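
The kernel also reports the CMA pool in /proc/meminfo when CONFIG_CMA=y, which is an easier way to watch it at runtime (the values below are just what I would expect with a 256 MiB reservation, not copied from the device):

cpi@clockworkpi:~$ grep -i cma /proc/meminfo
CmaTotal:         262144 kB
CmaFree:          261120 kB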

So, how can I see the memory reserved for the GPU? Is it possible to change it?

Best regards


I found that CMA (the Contiguous Memory Allocator) automatically hands out the requested amount of memory to the devices (GPU included) that need to allocate it. So we don’t need to dedicate a fixed portion of memory to the GPU, a portion that wouldn’t be available to the user even when the GPU isn’t using everything we allocated to it.

CMA takes care of giving the GPU the amount it needs and giving it back to the user (for other processes) when the GPU doesn’t need the memory anymore.

That’s why we always see the full total memory when running free -m.
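
A simple way to confirm this, assuming CONFIG_CMA=y so /proc/meminfo exposes the Cma* counters: watch CmaFree drop while a game/emulator is running and recover after it exits.

# Refresh the CMA counters every second while a GPU-heavy app runs in another session:
cpi@clockworkpi:~$ watch -n 1 'grep -i cma /proc/meminfo'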

This is a good document that explains it a bit:

We have 64MB available for CMA in our kernel. I think it is safe to give it 128MB instead. I didn’t notice any difference in performance, and I still don’t know whether the GPU actually uses more memory when we give more space to CMA.

We could ask the lima driver developers about it.
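
In the meantime, if the kernel was built with debugfs, one rough way to peek at buffers shared between the GPU and the display driver is the dma-buf debugfs file (not lima-specific, and it only lists buffers that were exported as dma-bufs):

cpi@clockworkpi:~$ sudo mount -t debugfs none /sys/kernel/debug   # only if not already mounted
cpi@clockworkpi:~$ sudo cat /sys/kernel/debug/dma_buf/bufinfo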
