You doing too much bro
|
I suggest you use https://runcloud.io/ ;)
|
OK, but for now I have moved only 2 of my sites. On this server I would like to put about 6 other sites that are very similar in structure and traffic. Can I keep adding sites?
|
memory usage
Check it with free:
Code:
$ free -m
I have 12G RAM on this box:
Code:
barry@paragon-DS-7:~$ free -m
Or you can run this -- it repeats the command every 15 seconds and dates and logs the results:
Code:
$ while sleep 15; do date >> my.log; free -m >> my.log; done
Code:
$ cat my.log
Or learn how to use the top command and sort your processes, in ssh/terminal:
Code:
$ man top
In the host's dashboard there are usage graphs. |
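On sorting processes in top: interactively, Shift+M sorts by memory and Shift+P by CPU; newer procps versions also accept a sort field on the command line. A minimal sketch:
Code:
$ top -o %MEM    # start top already sorted by resident memory (procps-ng)
|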
Quote:
I see that you like to give away money, will you give me a little? tnx |
That's just one Linux box on my office desk -- it's running an Xorg lightdm GUI. It's a single-processor 4-core workstation -- not a server.
|
Quote:
OMG !!! |
fuck you
|
Please-- put me on ignore -- asswipe
|
Is it possible to block Magneto664 or throw him out of the discussion?
It's a nice discussion, and I'm very sorry that the focus on the main topic is being lost because of an idiot |
Don't worry about it ...
GFY is full of little attention whores. That is what the main forum is for -- to hand someone their ass in a hat. |
Quote:
pray to Jesus or whatever you do. I hope nobody will hack your amazing servers - you give almost everything in plain text. wait....... sorry, I hope somebody hacks your servers. you give almost everything in plain text. bye.. and sweet kisses |
Umh...
Today is cron job day... I make one cron job for each site, an hour apart from one another, once a week. The cron jobs upload new content to MySQL, add new links to the sitemaps, etc. Some cron jobs last a few seconds (5-10-20), others sometimes run longer (2-3 min). Today I began to see strange behavior. For example, trying to install zip:
Code:
root@ubuntu-2gb-blr1-14-04-3:~# sudo apt-get install zip
or occasionally when visiting a site: http://porn-update.com/temp/Schermat...2002-26-28.png Reloading the page, everything goes back to working. Even nixstat, for a few days now, has had a hard time showing me the charts. I have added some sites in the past few days, some with 2-3000 visits a day, and they are not all there yet... the two best-performing sites that I would like to put on this server I have not moved yet. Are these problems that can be solved with some MySQL optimization (e.g. resizing tmp or caches), or is it already time to buy a larger server? |
There is a problem writing to the disk.
Run
Code:
$ df
and check the disk usage. Here is an example from a new VPS:
Code:
root@ds12-ams-2gb:~# df |
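Two df variants worth knowing (both standard coreutils flags): -h prints human-readable sizes, and -i reports inode usage -- a full inode table produces the same "No space left on device" error as full blocks do.
Code:
$ df -h    # sizes in G/M rather than 1K blocks
$ df -i    # inode usage; IUse% at 100% also means "no space left"
|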
When I log in:
Code:
System information as of Wed Sep 6 01:25:04 UTC 2017
Code:
root@ubuntu-2gb-blr1-14-04-3:~# df
I do, however, have MySQL tables with hundreds of thousands of rows. I also use this type of query for search:
Code:
$query_photo = "SELECT SQL_CALC_FOUND_ROWS ".$select_fields.", MATCH(title, description) AGAINST('".trim(addslashes($_GET['query']))."') as score FROM ".$prefix."photo WHERE MATCH(title, description) AGAINST('".trim(addslashes($_GET['query']))."') ".$sort_by." DESC LIMIT ".$start.", ".$step."";
I also use APCu, memcached and OPcache -- could they be creating the problem? I tried installing zip even with sudo, same result:
Code:
root@ubuntu-2gb-blr1-14-04-3:~# sudo apt-get install zip |
Today this also came up:
Code:
An unexpected error occurred:
Is it RAM? |
I also found this
Code:
root@ubuntu-2gb-blr1-14-04-3:~# sudo df -i |
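If df -i shows IUse% near 100%, the disk is out of inodes rather than bytes -- typically millions of tiny files, such as a disk cache or session spool. A rough sketch to count files per directory and find the culprit (the directory list here is an assumption; adjust it):
Code:
$ for d in /var /usr /home /tmp; do echo "$(sudo find "$d" -xdev -type f | wc -l) $d"; done | sort -rn
|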
And this
Code:
root@ubuntu-2gb-blr1-14-04-3:~# sudo mkdir /var/mysqltmp
What is eating all this space? |
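To answer "what is eating all this space?", running du one level at a time is usually quicker than guessing. A sketch (GNU du/sort flags):
Code:
$ sudo du -xh --max-depth=1 /var | sort -rh | head -15   # biggest /var subtrees first, same filesystem only
|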
Maybe now it works... :banana:
A lightning bolt crossed my mind while I was pooping... and I remembered having enabled, back in the day, Standard HTTP Caching: https://www.digitalocean.com/communi...n-ubuntu-14-04 I remembered there were some lines like these: CacheLockPath /tmp/mod_cache-lock CacheEnable disk I had added them to the virtual host file of the first site, just to try, and when I added the new domains (copying the config file) I copied them to all the new sites... I removed them from the .conf files, and for the moment everything seems to be back to working. I restarted the server, but I probably have not freed all the space used by these caches; I still have to figure out how to do that... and how to disable them permanently... I'll check over the next few days to see if everything keeps working... |
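If you want the disk cache off permanently rather than per-vhost, the modules themselves can be disabled -- a sketch, assuming they were enabled the usual Debian/Ubuntu way with a2enmod:
Code:
$ sudo a2dismod cache_disk cache   # removes the mods-enabled symlinks
$ sudo apachectl configtest && sudo service apache2 restart
|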
What does updatedb.mlocate do?
The thing is, when I look at this graph on DigitalOcean, the second service that sucks resources is always updatedb.mlocate. http://porn-update.com/temp/Schermat...2002-46-18.png I searched a bit on Google and found many guides on how to disable it, remove it, or delete it from the cron jobs. But after many searches I still have not figured out what it does and whether it is a necessary service... Can someone tell me what it is and what it does? And whether it can be disabled? Strange thing: I only see it on DigitalOcean; Nixstats does not show it. http://porn-update.com/temp/Schermat...2002-53-32.png |
Locate is a system tool, used like find.
updatedb.mlocate is what builds its database.
Code:
barry@paragon-DS-7:/$ locate apache|grep error
Code:
barry@paragon-DS-7:/$ apt search mlocate
sudo apt autoremove mlocate
sudo apt purge mlocate
if you really want to remove mlocate.
Code:
barry@paragon-DS-7:/$ locate mlocat
cat /usr/share/doc/mlocate/README

About
=====
mlocate is a locate/updatedb implementation. The 'm' stands for "merging": updatedb reuses the existing database to avoid rereading most of the file system, which makes updatedb faster and does not trash the system caches as much. The locate(1) utility is intended to be completely compatible to slocate. It also attempts to be compatible to GNU locate, when it does not conflict with slocate compatibility. New releases will be available at https://fedorahosted.org/mlocate/ .

Installation
============
Before installation it is necessary to create a group called "mlocate" to allow hiding the contents of the database from users. When updatedb is run by root, the database contains names of files of all users, but only members of the "mlocate" group may access it. "locate" is installed set-GID "mlocate", no other program should need to run with this GID.

Portability
===========
mlocate should be portable to all SUSv3-compliant UNIXes, although it is currently tested only on recent Linux distributions.

Bugs
====
Please consider reporting the bug to your distribution's bug tracking system. |
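If the concern is only the nightly I/O spike rather than locate itself, you can keep the package and just stop the daily reindex -- a sketch, assuming the stock Ubuntu cron job path:
Code:
$ sudo chmod -x /etc/cron.daily/mlocate   # updatedb no longer runs daily; locate still works, against a stale database
|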
So maybe for now I'll keep it... anyway, CPU we still have plenty of...
The very serious problem that I thought I had solved, and instead still have, is space... I know that all my sites together (on cPanel, on another server) weigh about 10-12 GB, and I have not yet transferred the two heaviest to this new server. Today, trying to decompress a file of about 300 MB, I got an out-of-space message. And I can't even figure out whether it's true that the space is exhausted... When I log in:
Code:
System load: 0.31 Processes: 105
https://i.imgur.com/0hLq0Kj.png Nixstats monitor https://i.imgur.com/WHwOgRw.png https://i.imgur.com/dYsjQ3d.png
Code:
root@ubuntu-2gb-blr1-14-04-3:~# df -h
Code:
root@ubuntu-2gb-blr1-14-04-3:~# du -max / | sort -rn | head -20
Code:
root@ubuntu-2gb-blr1-14-04-3:~# sudo du -sxm /var/* | sort -nr | head -n 15
I no longer have the problem from the other day, when the sites showed only errors; now they seem to stay online, but the space on the server is always exhausted... |
Check the webroot:
Code:
root@ds12-ams-2gb:/home# du -sh
or wherever your web root is -- /var/www? du -h will be more verbose. Are you caching any content? |
Code:
root@ubuntu-2gb-blr1-14-04-3:/var/www/html# du -sh
As far as I know, the only cache systems currently installed are APCu, memcached and OPcache. CDN is the only folder with photos, about 15000 of them, but it is the one I cannot extract because the space runs out. |
15000 photos at ~100 KB each is 15000 * 100,000 = 1,500,000,000 bytes, so /CDN would be about 1.5 GB.
Maybe. Why are there no users shown? /home/user -- and what is "finished space" supposed to mean? root should be able to access all locations. |
http://porn-update.com/temp/Schermat...2016-07-43.png
No, CDN is still empty. I managed to upload the zip, but when I try to extract it, it extracts some photos and then: "Space finished". Fully extracted, CDN weighs about 309 MB. http://porn-update.com/temp/Schermat...2016-09-40.png The strange thing is also that some counters see the space as exhausted, others as half empty... Maybe some counters don't see the data in some caches? I'm still thinking about that damned "Standard HTTP Caching", which saved things here: /var/cache/apache2/mod_cache_disk -- and actually in this folder there is still something... http://porn-update.com/temp/Schermat...2016-17-29.png http://porn-update.com/temp/Schermat...2016-30-08.png I would try to empty it/delete it, but can I do that with an "rm", or will it destroy the server? |
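On the rm question: the cache contents are disposable, so deleting them is safe as long as you remove only what is inside the directory, not the directory itself, and ideally stop Apache first so nothing is writing there. A sketch:
Code:
$ sudo service apache2 stop
$ sudo rm -rf /var/cache/apache2/mod_cache_disk/*   # contents only; keep the directory
$ sudo service apache2 start
|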
Umh... I did this:
Code:
root@ubuntu-2gb-blr1-14-04-3:/var/cache/apache2/mod_cache_disk# du -sh
File: /etc/default/apache2
Code:
### htcacheclean settings ###
But something seems not to have worked properly... |
I launched this:
Code:
root@ubuntu-2gb-blr1-14-04-3:~# htcacheclean -p/var/cache/apache2/mod_cache_disk -l 1
It did something:
Code:
System load: 0.0 Memory usage: 3% Processes: 70 |
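For what it's worth, -l 1 asks htcacheclean to shrink the cache to one byte, i.e. empty it. If the cache ever gets re-enabled, htcacheclean can also run as a daemon that keeps the size bounded -- a sketch using its documented flags (the 100M cap is an arbitrary example):
Code:
$ sudo htcacheclean -d60 -n -p /var/cache/apache2/mod_cache_disk -l 100M   # sweep every 60 minutes, nicely, cap at 100 MB
|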
http://porn-update.com/temp/Schermat...2017-44-31.png
Much better than before, but the time still seems a little high... Will it improve over time? Can I improve it in some way? |
If you are using Varnish you are caching pages -- and taking up space -- see if you can purge the little-used pages on a daily basis.
If you request images from other servers, you may have slow page load times depending on the number of images requested, the geolocation and peering to your server(s), and the current load on the server you are requesting images from. Fewer images per page might help. Using jQuery lazy load in your HTML might help too. The initial load time should start out better. |
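On purging Varnish: bans can be issued from the command line without restarting anything. The syntax varies by Varnish version -- this sketch is 4.x-style, and the URL pattern is a made-up example:
Code:
$ varnishadm "ban req.url ~ ^/rarely-visited/"   # invalidate cached objects whose URL matches
|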
I did install Varnish, but it has practically never worked; on DigitalOcean, Varnish and Apache quarrel over port 80 because of some symbolic link. The thing was fixed on the 16.04 image but left somewhat abandoned on the 14.04 one.
Currently it seems that either Apache starts or Varnish starts; together they refuse to work. Unfortunately almost all the images on my sites live on the content producers' sites, over which I have no control. A long time ago I tried creating the thumbnails and hosting them on my server (in the CDN folder), using CloudFlare for cache and CDN, but I lost about 80% of the visits... It is still more or less active here: Big Boobs Hard Pics | Big Boobs, Huge Boobs, Huge Tits, Busty -- but I have not recovered all the visits. For some time I have had a lazyload installed, http://www.lezdomhardtube.com/lazysizes.min.js, but not the jQuery one, because the jQuery framework alone weighs practically more than the code of my sites. I am not very convinced by this lazyload, because while it does not load all the photos on the page, it loads more than the ones visible in the window... it seems to do its job, but a little too eagerly... and I have not noticed significant changes between before and after implementing it... As soon as I have free time I'll try another one, just to test. Thanks as always for the answers. |
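The usual fix for the port-80 quarrel is to move Apache to 8080 and let Varnish own 80 -- a sketch for stock Ubuntu 14.04 packaging (the file paths are the default ones):
Code:
$ sudo sed -i.bk 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
# also change every <VirtualHost *:80> to <VirtualHost *:8080> in sites-available/,
# set DAEMON_OPTS in /etc/default/varnish to listen on :80,
# and point the backend in /etc/varnish/default.vcl at 127.0.0.1:8080
$ sudo service apache2 restart && sudo service varnish restart
|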
I'm trying to focus on the headers to speed things up.
For example, I have enabled mod_deflate, but I do not understand whether it is compressing, and what it is compressing... I have all its nice rules in the .htaccess files of each site, but I have no idea what, or where, my server's configuration is.
Code:
<IfModule mod_deflate.c>
On the server, however, I saw that there are settings for the compression level, but I cannot even find the configuration files -- in the sense that I find the files, but inside there is nothing of what the guides talk about... Another thing: I would like, for example, to add CharSet: UTF-8 to the Content-Type header. I have seen guides saying it is in the httpd.conf file, which I cannot find, or in the Apache configuration files, and in mine there is nothing about it... I'm not understanding anything... This is the current header configuration of my sites; something is definitely missing, but I cannot figure out how to add or edit it (except via .htaccess). http://porn-update.com/temp/Schermat...2018-38-14.png What and how can I configure server headers globally, without using individual .htaccess files? |
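On Debian/Ubuntu there is no single httpd.conf: global settings live under /etc/apache2/ and are toggled with a2enmod/a2enconf. A sketch of setting one header globally (the file name my-headers.conf and the header chosen are just examples):
Code:
$ sudo a2enmod headers   # the Header directive needs mod_headers
$ printf 'Header always set X-Content-Type-Options "nosniff"\n' | sudo tee /etc/apache2/conf-available/my-headers.conf
$ sudo a2enconf my-headers
$ sudo apachectl configtest && sudo service apache2 reload
|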
So... in a couple of days I figured something out... namely, that there is practically nothing to figure out...
Deflate already seems to do everything by itself and works very well as it is... and the less you touch it, the better... I have, however, found that some tools like PageSpeed, GTmetrix and Varvy say that compression is not enabled for these 2 files: search.js and lazysizes.min.js. I added "text/javascript" to /etc/apache2/mods-enabled/deflate.conf (an idea found while searching), and search.js seems resolved, but they keep telling me that compression is not enabled for lazysizes.min.js, perhaps because of the ".min", which maybe keeps the extension from being recognized. I could make a change on my sites and remove the .min from the filename, but it would take a long time, and surely sooner or later the thing would repeat itself. Is there a way to permanently fix, server-side, the failure to compress .js and .min.js files? P.S. For the charset UTF-8, I realized it is in this file: /etc/apache2/conf-available/charset.conf -- just enable this: AddDefaultCharset UTF-8 P.P.S. Considering that almost all the images on my sites come from external resources, could it be a good idea to enable compression for images too via deflate? Or would it completely kill my server's resources? Does deflate also affect images from external resources? P.P.P.S. On PageSpeed I noticed for the first time the PageSpeed module for Apache (I had never noticed it before having a server). Could it be a good or bad idea? Usually I do not trust BigG too much, because he has a tendency to take much more than he gives, and I do not want to hand my server's resources to him for free. He doesn't really need them. |
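Rather than trusting the testing tools, you can ask the server directly whether a given file comes back gzipped (this uses the lazysizes URL mentioned earlier). As for the P.P.S.: deflate only touches responses your own Apache serves -- hotlinked images on other hosts never pass through it -- and compressing JPEG/PNG is nearly pure CPU waste, since those formats are already compressed.
Code:
$ curl -s -D- -o /dev/null -H 'Accept-Encoding: gzip' http://www.lezdomhardtube.com/lazysizes.min.js | grep -i '^content-encoding'
# no output = served uncompressed; "Content-Encoding: gzip" = deflate is working
|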
It might be just easier to spend another $20/mo and expand the server's resources?
That is a remote server you cannot control. Use
Code:
$ sed 's/\.min//g'
Test it first, then use sed -i.bk for an in-place edit (.bk is the backup-file extension). I like to make a backup directory with copies in case I fuck up:
Code:
$ mkdir backedup; cp * backedup
Code:
$ find . -name "*.min.js"
Code:
$ find . -name "*.js" -o -name "*.min.js"
To find script references (grep -r is recursive, so start in the right location, just above the files):
Code:
$ grep -rni '.js' |
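Putting those pieces together -- a sketch that rewrites every reference from lazysizes.min.js to lazysizes.js under the web root (the /var/www paths are assumptions), keeping .bk backups; the file itself also needs to exist under the new name:
Code:
$ cp /var/www/html/lazysizes.min.js /var/www/html/lazysizes.js   # same file, new name
$ grep -rl 'lazysizes\.min\.js' /var/www | xargs sed -i.bk 's/lazysizes\.min\.js/lazysizes.js/g'
|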
But I know the file -- it's the famous lazyload (sorry if I did not write that before; I didn't think of it).
But I cannot understand why deflate is not compressing it. |
I was thinking of installing Fail2ban, but I saw that it reads the Apache error logs.
So, just to have a look, I opened the Apache errors and noticed that my logs are full of this:
Code:
[Mon Sep 18 06:39:16.678185 2017] [core:error] [pid 31667] [client 180.76.15.6:29891] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
Code:
RewriteCond %{HTTP_USER_AGENT} ^.*MJ12bot [NC,OR]
I'm afraid that if I install Fail2ban it will continually read new errors and probably drink all of my server's resources... (also, I don't much like the fact that every visit generates a log line.) Searching on Google, it seems to be some URL-rewrite problem. Looking up LogLevel debug I found this: mod_rewrite - Apache HTTP Server Version 2.4 But I think I misunderstood something, because:
Code:
root@ubuntu-2gb-blr1-14-04-3:~# tail -f error_log|fgrep '[rewrite:'
What am I missing? How should I use this? |
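An aside on the Fail2ban worry: it tails logs incrementally rather than re-reading them, so the overhead is modest, and it ships with a stock jail aimed at exactly this kind of bot traffic. A sketch (the jail name is the stock one):
Code:
# /etc/fail2ban/jail.local
#   [apache-badbots]
#   enabled = true
$ sudo service fail2ban restart
$ sudo fail2ban-client status apache-badbots   # lists currently banned IPs
|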
Use find or locate, or read the configuration file for that domain:
Code:
$ cd /etc/apache2/sites-available
Code:
$ (cd /etc/apache2/sites-available && grep -i 'error\.log' <domain config file>)
Code:
$ tac <path/to/file/error.log> |less
Code:
$ tac <path/to/file/error.log> | egrep -i 'this|or|that' |less |
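What was missing above is both the path (on Ubuntu the default error log is /var/log/apache2/error.log, not ./error_log) and the log level: in Apache 2.4, rewrite tracing is enabled per module. A sketch:
Code:
# add to the vhost or apache2.conf, then reload:
#   LogLevel warn rewrite:trace3
$ sudo apachectl configtest && sudo service apache2 reload
$ sudo tail -f /var/log/apache2/error.log | fgrep '[rewrite:'
|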