GoFuckYourself.com - Adult Webmaster Forum

GoFuckYourself.com - Adult Webmaster Forum (https://gfy.com/index.php)
-   Fucking Around & Business Discussion (https://gfy.com/forumdisplay.php?f=26)
-   -   Tech How much RAM my server is actually using ? (https://gfy.com/showthread.php?t=1307482)

freecartoonporn 01-02-2019 07:25 PM

How much RAM is my server actually using?
 
My server shows:

Code:

free -h
              total        used        free      shared  buff/cache  available
Mem:          125G        43G        16G        1.2G        65G        79G
Swap:          4.0G        2.3G        1.7G

This box has 128 GB of RAM.

Am I really using all of it? Or can I move to a 64 GB RAM server?

The server is running:
MySQL: innodb_buffer_pool_size = 70G (the actual database size is only 20 GB)
Elasticsearch: 15 GB

thanks for your time.
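Before deciding, it helps to see which processes actually make up that 43G "used". A quick sketch using plain `ps` (from procps, present on any Linux box) that lists the biggest resident sets and a rough total:

```shell
# Top memory consumers by resident set size (RSS is reported in KiB),
# plus a rough total -- RSS is the main ingredient of the "used" column.
ps -eo rss,comm --sort=-rss | head -n 10
ps -eo rss= | awk '{sum += $1} END {printf "total RSS ~ %.1f G\n", sum / 1024 / 1024}'
```

If mysqld's RSS is far below the 70G buffer pool cap, the pool is oversized for a 20 GB database and the box is a downsizing candidate.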

shake 01-02-2019 07:55 PM

I would install htop; it has more useful output.

freecartoonporn 01-02-2019 08:16 PM

Quote:

Originally Posted by shake (Post 22389959)
I would install htop, it has a more useful output.

I have htop installed, and it shows:

https://i.imgur.com/9wjvSLH.jpg

ghjghj 01-02-2019 08:20 PM

Quote:

Originally Posted by freecartoonporn (Post 22389948)
Code:

free -h
available
79G


:2 cents:

NatalieMojoHost 01-03-2019 10:54 AM

The deal with Linux is: it will try to use all of the RAM in the system for speeding up the filesystem and other things. That's what the buf/cache is.

The shortest answer is - you have 79GB available and 43GB hard in-use, but Linux is taking another 65GB and using it to slightly speed up your system in other ways.

If another process needs that memory, say MySQL or Elasticsearch, it's able to pull it out of that buffer/cache pool and away from the filesystem. But it will pull it from the 16GB free first.

Hope this helps clear things up for you.
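The kernel publishes exactly this estimate in /proc/meminfo. A quick way to see the numbers described above (MemAvailable is roughly MemFree plus the reclaimable part of the page cache):

```shell
# MemAvailable is the kernel's own estimate of RAM that can be handed to
# new processes without swapping: MemFree plus easily reclaimable cache.
awk '/^(MemTotal|MemFree|MemAvailable|Buffers|Cached):/ {printf "%-14s %6.1f G\n", $1, $2 / 1024 / 1024}' /proc/meminfo
```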

kjs 01-03-2019 05:05 PM

If you really want to understand what's happening on the server, install netdata.

https://github.com/netdata/netdata

You will have to enable the hooks for whatever your stack is, e.g. nginx/Apache, PHP, etc.

A VPS is even more misleading when it comes to the standard tools, because they usually don't show wait times, swapping, connection pools running out, etc.

wankawonk 01-03-2019 06:21 PM

Quote:

Originally Posted by NatalieMojoHost (Post 22390208)
The deal with Linux is: it will try to use all of the RAM in the system for speeding up the filesystem and other things. That's what the buf/cache is.

The shortest answer is - you have 79GB available and 43GB hard in-use, but Linux is taking another 65GB and using it to slightly speed up your system in other ways.

If another process needs that memory, say MySQL or Elasticsearch, it's able to pull it out of that buffer/cache pool and away from the filesystem. But it will pull it from the 16GB free first.

Hope this helps clear things up for you.

solid answer

I would recommend just using top: add up the "free" and "buff/cache" columns, and that's how much you really have "free". Though you should always leave some buff/cache to speed up I/O and any potential swapping.

Important Elasticsearch detail: if your shard size exceeds the amount of heap space you allocate to it, Elasticsearch gets a big speed boost from having memory available for buff/cache. So be careful to monitor that; don't just assign all your buff/cache to (for example) Redis, because it might severely impact your Elasticsearch performance.

freecartoonporn 01-03-2019 07:22 PM

Quote:

Originally Posted by NatalieMojoHost (Post 22390208)
The deal with Linux is: it will try to use all of the RAM in the system for speeding up the filesystem and other things. That's what the buf/cache is.

The shortest answer is - you have 79GB available and 43GB hard in-use, but Linux is taking another 65GB and using it to slightly speed up your system in other ways.

If another process needs that memory, say MySQL or Elasticsearch, it's able to pull it out of that buffer/cache pool and away from the filesystem. But it will pull it from the 16GB free first.

Hope this helps clear things up for you.

Quote:

Originally Posted by wankawonk (Post 22390369)
solid answer

I would recommend just using top, add up the "free" and "buf/cache" columns and that's how much you have "free". though you should always leave some buf/cache to speed up any potential swapping.

Important elasticsearch detail: If your elasticsearch shard size exceeds the amount of heap space you allocate to it, elasticsearch will get a big speed boost from having memory available for buf/cache. So be careful to monitor that. don't just assign all your buf/cache to (for example) redis because it might severely impact your elasticsearch performance.

Thank you. I was planning to move to a 64 GB RAM server; I guess more RAM is always better.

rowan 01-03-2019 08:11 PM

Quote:

Originally Posted by freecartoonporn (Post 22389948)
my server shows

Code:

free -h
              total        used        free      shared  buff/cache  available
Mem:          125G        43G        16G        1.2G        65G        79G
Swap:          4.0G        2.3G        1.7G


I'm not so familiar with Linux, but the fact that you have some swap used, especially a couple of gigs, may suggest that at some point the RAM usage was a lot higher, and the system had to swap out an idle task.
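You can check whether that's what happened: per-process swap usage lives in /proc/&lt;pid&gt;/status. A small sketch (no output means nothing is currently swapped out; needs no extra tools):

```shell
# List processes with pages currently in swap, largest first.
# VmSwap is reported in kB; processes with zero swap are skipped.
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {name = $2} /^VmSwap:/ {if ($2 > 0) printf "%-20s %8d kB\n", name, $2}' "$f"
done 2>/dev/null | sort -rn -k2 | head
```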

freecartoonporn 01-03-2019 08:23 PM

Quote:

Originally Posted by rowan (Post 22390384)
I'm not so familiar with Linux, but the fact that you have some swap used, especially a couple of gigs, may suggest that at some point the RAM usage was a lot higher, and the system had to swap out an idle task.

Maybe because swappiness is set to 60, so the kernel tends to swap more often.
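Swappiness can be checked and tuned at runtime; a quick sketch (10 is just a commonly used lower value, not a recommendation from this thread):

```shell
# Current value (kernel default is 60; higher = more eager to swap).
cat /proc/sys/vm/swappiness
# To make the kernel less swap-happy (needs root):
#   sysctl vm.swappiness=10                      # takes effect until reboot
#   echo 'vm.swappiness=10' >> /etc/sysctl.conf  # persistent across reboots
```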


All times are GMT -7. The time now is 10:57 AM.

Powered by vBulletin® Version 3.8.8
Copyright ©2000 - 2025, vBulletin Solutions, Inc.
©2000-, AI Media Network Inc