No, in bash (.sh) the ; at the end of a statement is not needed.
var=something is a declaration, like JavaScript; $var afterwards references the declared variable, like echo $var. The caps are just what I used -- they could be in lowercase too, but bash is case sensitive. In a terminal, $ dothis; dothat; runs the statements one after the other; dothis && dosomethinggood runs the second only if the first succeeds; and | (pipe) feeds the output to the next statement.
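A minimal sketch of all of this together (the variable names and commands are just examples):
Code:
#!/bin/bash
var=something          # declaration: no spaces around '=', no '$' on the left
echo $var              # reference: the '$' prefix reads the variable
VAR=other              # case sensitive: VAR and var are two different variables
echo $VAR

pwd; date                                  # ';' just separates statements
mkdir -p /tmp/demo && echo "mkdir worked"  # '&&' runs the right side only on success
date | wc -c                               # '|' pipes output into the next command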
So,
I did that... Code:
#!/bin/bash
Code:
/var/backup/mysql_backup: line 15: +%s: command not found
Try like this and the time will be in epoch seconds:
Code:
STARTTIME=(`date +%s`)
ENDTIME=(`date +%s`)
It works, but the result is kinda odd...
Code:
Elapsed_time: 1507671766-1507671705
I tried to put quotes, parentheses etc. etc., but it does not want to do the math... Can we do this last thing too? It takes more time for this little thing than to configure the whole server... :)
Maybe in $()
Code:
TOTTIME=$($ENDTIME-$STARTTIME)
Do the math -- the difference is in seconds :)
Code:
barry@paragon-DS-7:~$ bc <<< 1507671766-1507671705
61
61 seconds
Says:
Code:
/var/backup/mysql_backup: line 93: 1507756208-1507756139: command not found
This instead works:
Code:
TOTTIME=`expr $ENDTIME - $STARTTIME`
Now I'm worried about those odd quotes... In PHP, when I find those quotes it means there was a copy-paste error from the HTML and nothing works anymore. So I have the habit of removing them as soon as I see them and replacing them with a normal apostrophe... in sh, instead, they seem to be fundamental... I have surely removed some, thinking they were an error :error I shouldn't have done any damage, because everything seems to work, but maybe I'll go look for the original script and see if there were any... :upsidedow P.S. It's strange how we can install an entire server, and then the simplest things drive us crazy... :)
These are called backticks.
bc is a terminal calculator program:
Code:
apt install bc
man bc
Quote:
Like our @array = (<FILENAME>); in Perl.
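Pulling the thread together, a minimal sketch of the whole timing pattern (the sleep is just a placeholder for the real backup work); $(( )) arithmetic avoids both the expr call and the "command not found" trap:
Code:
#!/bin/bash
STARTTIME=$(date +%s)    # epoch seconds; $( ) is the modern spelling of backticks

sleep 61                 # placeholder: the mysqldump work goes here

ENDTIME=$(date +%s)
# TOTTIME=$($ENDTIME-$STARTTIME) tries to *execute* "1507671766-1507671705";
# $(( )) evaluates it as integer arithmetic instead.
TOTTIME=$((ENDTIME - STARTTIME))
echo "Elapsed_time: ${TOTTIME} seconds"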
There's one last thing that scares me a lot.
Load: http://porn-update.com/temp/Schermat...2023-13-39.png
Much of that red is due to the phase of moving sites and all the import errors of those damned databases. The other server was also very red at the beginning, then it slowly normalized. This one is taking a little longer... But what sounds strange to me is that looking at the detail of the server, you cannot see why there is all that red. http://porn-update.com/temp/Schermat...2023-15-54.png
The CPU rarely reaches 90%, the memory is a bit chubby but it works, there is plenty of disk, and there are no errors or special problems... The sites are running well, fast, without interruptions or visible slowdowns... The CPU sometimes says "stolen" even while working at maybe 70%, and that alone is odd. But it is the load that worries me most: sometimes 4-5, and I even saw 7 on cronjob days (they are still synchronizing a lot of data due to the missing cronjobs on the other servers).
What does load actually indicate? And how much do I have to worry? On a scale that goes from "quiet, everything's all right" to "shit, the server is going to explode, everybody run before it's too late, shit we'll all die :eyecrazy", where am I?
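For what it's worth: on Linux, load average counts processes that are running or waiting (for CPU, or stuck in uninterruptible disk I/O), so the rough rule of thumb is to compare it against the number of cores. A quick check, as a sketch:
Code:
uptime                # 1-, 5- and 15-minute load averages
cat /proc/loadavg     # same numbers, plus runnable/total task counts
nproc                 # core count: sustained load well above this means queuing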
Your problem is your PHP script and the MySQL daemon (server) -- the software side of your application. Or:
If you look at the times of peak usage and grep those times in the server access logs, you may find that Bing is indexing too many pages too fast -- you can place a directive in robots.txt:
Code:
User-agent: bingbot
Crawl-delay: 5
(a value of 5 to 10 seconds) see https://www.siteground.com/kb/how_to..._eng ine_bot/ https://www.bing.com/webmaster/help/...ntrol-55a30302
Slow Bing down -- don't Disallow Bing: they bring good converting traffic, and they sell their indexed database to Yahoo and other search engines.
You may find Baidu is indexing too many pages too fast -- block them at your firewall; I have had luck that way. Porn is illegal in China and you won't sell to legit Chinese buyers anyway.
# Free IP2Location Firewall List by Search Engine
# Source: Whitelist Robots by Search Engine | IP2Location
Code:
whois -h v4.whois.cymru.com " -c -p 183.131.32.0/20"
https://github.com/arineng/nicinfo will give you full RDAP/whois information. A third way is just $ whois <ip address>
If you are generating many dynamic pages, search engines may be causing this problem. Scrapers and *bad bots* may be the issue too. This is what server logs are for: searching for problems and finding patterns. A firewall is the way to go -- just do not answer -- drop the packet.
But holy cow :angrysoap
I was away 2 days and the server was invaded by bots, just like you said... http://porn-update.com/temp/Schermat...2000-34-49.png
Code:
51.255.65.66 - - [16/Oct/2017:22:25:31 +0000] "GET /27 HTTP/1.1" 302 3634 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.2; +http://ahrefs.com/robot/)"
Thanks, just in time
So, I limited Bing via robots.txt on all my sites. For now I see no big difference, but maybe it takes a while because of the cache.
I also found in my .htaccess these rules that should stop Yandex and China:
Code:
RewriteCond %{HTTP_USER_AGENT} ^.*MJ12bot [NC,OR]
Then I went to IP2Location, I signed up and generated the file, but I did not understand how to use the file they gave me... I generated "Linux iptables" rules; they gave me things like this:
Code:
iptables -A INPUT -s 104.146.0.0/18 -j DROP
Do I need to install iptables? Will UFW still work? I have seen some sites that say to open a UFW configuration file and add the lines, but my lines have a different format, and the files to modify indicated on these sites are always different... I also thought of manually converting lines like this:
Code:
iptables -A INPUT -s 104.146.100.0/22 -j DROP
Code:
# block IP
But I'm not sure that doing this manually is a good idea; I'm not really understanding anything. And I would not use the rules in .htaccess: first because these too have a different format from what I used before, and then because there are so many of them... Do I need to install Fail2ban? I really need to fix this quickly because my server is melting, can you help me?
Code:
ufw deny from 192.187.100.58 to any;
Code:
root@ds12-ams-2gb:/home/work# ufw status numbered
Code:
root@ds12-ams-2gb:/home/work# ufw delete 37
Mapping the rules is a better idea, but I haven't seen a good solution for ufw, only for iptables and now nftables.
ufw is an acronym for Uncomplicated FireWall: UFW: The Linux Uncomplicated Firewall <uncomplicated tutorial
iptables is sort of hard to understand and has been superseded by nftables: https://linux-audit.com/nftables-beg...fic-filtering/ <nftables
Baidu doesn't play by the rules regarding robots.txt and will use IPs to spider you without any user-agent signature that says 'baidu', making your .htaccess code useless. Get the IP CIDRs and block them in the ufw firewall.
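Since IP2Location hands you iptables lines, one way to reuse them with ufw is to strip out just the CIDR. A sketch -- the filename is hypothetical and it assumes lines shaped like iptables -A INPUT -s 104.146.0.0/18 -j DROP:
Code:
# field 5 of each iptables line is the source CIDR
awk '/^iptables/ {print $5}' ip2location-iptables.txt | while read -r cidr; do
    sudo ufw insert 1 deny from "$cidr" to any
done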
So, I downloaded the CIDRs of the engines that I want to block http://porn-update.com/temp/Schermat...2001-39-21.png and launched this:
Code:
while read line; do sudo ufw insert 1 deny from $line to any; done < cdir.txt
But in the access.log the ones I see most often are:
Opensiteexplorer.org/dotbot, [email protected]
semrush.com/bot.html
bing.com/bingbot.htm
ahrefs.com/robot/
Apart from Bing, the rest seem to be marketing tools, some more or less connected to Google or moz.com. I don't use them, and above all I don't need them if they destroy my server first... Can I block them? Always via IP in UFW? And in that case, which IPs should I block? Their IPs in my access.log change, e.g.:
Code:
46.229.168.76 - - [17/Oct/2017:23:04:53 +0000] "GET /search-busty%20mom%20loves%20to%20suck%20cock/ HTTP/1.1" 200 24789 "-" "Mozilla/5.0 (compatible; SemrushBot/1.2~bl; +http://www.semrush.com/bot.html)"
Or is it better in this case to use robots.txt? Is there a serious list (robots.txt or IP) of bad bots to block?
So, in the meantime... I don't know if I did something stupid...
But I did this thing... I searched the access.log for some bots with grep:
Code:
grep ahrefs /var/log/apache2/access.log
Then I copied a few thousand lines and wrote a PHP script that creates a file with only the IPs, line by line (leaving the duplicates):
Code:
<?
(output: http://porn-update.com/temp/bad-bot-cidr.txt)
Then with the usual while loop I added the rules to UFW:
Code:
while read line; do sudo ufw insert 1 deny from $line to any; done < /var/www/html/bad-bot-cidr.txt
Code:
while read line; do sudo ufw delete deny from $line; done < /var/www/html/bad-bot-cidr.txt
The CPU graph goes up and down at the moment, but the load graph is slowly descending. http://porn-update.com/temp/Schermat...2002-02-54.png
I'll wait a little and see what happens...
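As an aside, the grep-plus-PHP step can be collapsed into one shell pipeline (a sketch; the bot name is just an example, the paths are the ones from the post above):
Code:
# pull the unique client IPs for one bot out of the access log
grep -i 'semrush' /var/log/apache2/access.log | awk '{print $1}' | sort -u \
    > /var/www/html/bad-bot-cidr.txt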
If the only tool in your toolbox is a hammer, that is how you screw in a wood screw...
Try this:
Code:
$ cut -d'-' -f1 /home/work/domain.com/logs/access.log | grep -v '99\.3' | sort | uniq -c | sort -nr | sed 's/\([0-9]\) \([0-9]\)/\1:\2/g' | less
(uniq -c only counts adjacent lines, hence the sort before it)
Instead of | less you can use > fileName.* -- you don't need a hammer to tighten a screw :)
After checking the whois:
Quote:
Code:
ufw deny from 173.208.249.224/29 to any;
http://www.technology-training.co.uk...CIDR_large.gif
I thought of a fast way to get an IP CIDR:
Code:
$ whois 173.208.249.226 |grep 'CIDR:'|cut -d':' -f2|sed -e 's/^/ufw deny from /g' -e 's/ / /g' -e 's/$/ to any;/g'
:)
So, something I did...
Now there are virtually only the Google and Bing bots in my access.log. But CPU and load are still at absurd levels... The strangest thing: something has changed on both my servers... :disgust
http://porn-update.com/temp/Schermat...2002-38-04.png http://porn-update.com/temp/Schermat...2002-38-29.png
This week the visits have not doubled (indeed, they have fallen a bit), but something has obviously changed, and I have no idea what it is... I have not changed anything; I searched all the logs I know, but I find nothing that can explain this sudden increase in CPU usage. In the error logs I often find lines like this:
Code:
[Fri Oct 20 01:00:20.885615 2017] [core:error] [pid 4771] [client 37.9.113.202:36406] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
These days I have also been checking the sites often and they seem to work well. NIXStats says that the mysqld process is often at 130%, 150%, 180%. My sites definitely make heavy use of MySQL, but having changed nothing, I do not understand why. (It was very high when I had problems importing tables, but it had normalized after fixing them.)
With no increase in visits, and having eliminated many of the bots, who or what is using my CPU and my MySQL? I don't know what else to do to understand what's going on... :helpme I'm kinda worried also because visits usually increase on Saturday and Sunday, and I have no idea what will happen this week.
https://www.google.com/search?client...utf-8&oe=utf-8
https://gist.github.com/JustThomas/141ebe0764d43188d4f2
I usually try searching the exact error to get some idea -- seems this may be some .htaccess rewrite errors...
It's the first thing I always do; I ask here when I find nothing.
I found a lot of solutions for WordPress, but my sites are not WordPress, and I understand very little of URL rewriting... I wrote these rules a long time ago, following guides, and I never saw this error before, until I started to manage my own server... My URL rewriting is really simple and stupid, and the error does not give many clues about what creates it.
Code:
RewriteEngine On
But I also found another thing... Last week Google decided to scan my sites, all together, and a lot of pages of each site... Now I'm thinking, I don't know whether to:
Wait a few days; maybe when Google has finished this scan the server will return to a normal regime. Or
Try to limit Google's consumption, for example by adding If-Modified-Since and Last-Modified in the headers of my pages. They were already there, but a while ago I had to comment them out because they created problems with some crappy VPS.
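If you do re-enable Last-Modified, you can verify the conditional-request behaviour from the shell before letting Googlebot judge it (a sketch; the URL and date are placeholders). A 304 with no body is what saves the CPU:
Code:
# send a conditional request: 304 Not Modified means the page was not re-generated
curl -s -o /dev/null -w '%{http_code}\n' \
     -H 'If-Modified-Since: Sat, 21 Oct 2017 00:00:00 GMT' \
     'http://www.example.com/search-boobs/'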
I had a problem with the new Apache2 version, then changed to Nginx for other reasons.
Code:
</IfModule>
But where?
I remember you had already shown me this thing, but for some strange reason it is not in my .conf files... Where should I add it, in the mysitecom.conf files? In sites-enabled or sites-available? Or both? Then do I have to disable and re-enable the sites? Restart Apache, clearly, but do I have to reboot the server too? I want to fix this, because my logs are still full of those errors. Perhaps there is some new hope: in the last few hours something is changing; as suddenly as it started, it seems perhaps to be returning to normal... http://porn-update.com/temp/Schermat...2001-55-24.png http://porn-update.com/temp/Schermat...2001-55-55.png
Code:
cd /etc/apache2/sites-available
then
######
# then make the symbolic link
Code:
a2ensite <file>
then
Code:
service apache2 reload
reload, as opposed to restart: this just reloads the new configuration.
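To answer the where/how questions directly, a sketch (the .conf name stands for your mysitecom.conf): edit in sites-available, let a2ensite create the sites-enabled symlink, and reload -- no server reboot needed:
Code:
cd /etc/apache2/sites-available
# edit mysitecom.conf here, then:
sudo a2ensite mysitecom.conf    # creates the symlink in sites-enabled
sudo apache2ctl configtest      # "Syntax OK" means it is safe to apply
sudo service apache2 reload     # reload, not restart, and no reboot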
So, I made the changes on all my domains, then I waited some days, because here things never seem to happen right away... maybe because of the various caches, but it always seems to take a few days to see the changes actually applied.
(Then I waited a few more days due to a flu and fever, balls.)
The errors unfortunately did not go away; indeed, I find some very strange ones, like this:
Code:
[Wed Nov 01 01:28:37.585964 2017] [core:error] [pid 31598] [client 113.200.85.109:36600] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: https://m.baidu.com/from=1005640b/bd_page_type=1/ssid=0/uid=0/pu=cuid%40Yiv_ugul2ill82uxgaBWiguwHtY5iHue_u2b8_uh2iqMuHi3A%2Cosname%40baidubrowser%2Ccua%40_a2St4aQ2igyNm65fI2UN_aLXioouvhBmmNqA%2Ccut%4099mjqk4iv9grivh5gaXliouRL8_4h2NlgNEHjA4msqqSB%2Ccsrc%40ios_box_suggest_txt%2Cctv%402%2Ccfrom%401005640b%2Cbbi%40ga2Mijah2uz3uSf3lh2ti_O3sf0kuSNT0uvo8guESilSu2iuA%2Ccen%40cuid_cua_cut_bbi%2Csz%401320_2004%2Cta%40iphone_1_11.0_5_4.10%2Cusm%401/baiduid=242780940BA28A3DDDE364D833464EE4/w=0_10_/t=iphone/l=1/tc?ref=www_iphone&lid=11746036396691745334&order=3&fm=alop&tj=www_normal_3_0_10_title&url_mf_score=3&vit=osres&m=8&cltj=cloud_title&asres=1&title=JapanesePornUpdates%7CJapanese%2CAsian%2CExotic...&dict=32&w_qd=IlPT2AEptyoA_yiiC6SnGjEtwQ4INvD8&sec=25105&di=2c8f31196ae39669&bdenc=1&tch=124.167.24.701.1.0&nsrc=IlPT2AEptyoA_yixCFOxXnANedT62v3IEQGG_yRZAje5mFqyavvxHcFqZj0bNWjMIEb9gTCc&eqid=a3024dde9f9ee8001000000359f92319&wd=&clk_info=%7B%22srcid%22%3A%221599%22%2C%22tplname%22%3A%22www_normal%22%2C%22t%22%3A1509499715089%2C%22sig%22%3A%2241388%22%2C%22xpath%22%3A%22div-a-h3%22%7D
The CPUs continue to come and go; in the last 15 days they have behaved in a really odd way. It seems my sites need about 20% to work regularly, and then when the search engines arrive I would need another 10 servers... I don't know if there is something that does not work, but in that case it fails only with search engines, because the sites work very well. Visits have not undergone major increases or losses; they are more or less stable. http://porn-update.com/temp/Schermat...2003-31-01.png http://porn-update.com/temp/Schermat...2003-31-21.png
In the access_log I see a lot of Google and especially a lot of Bing, even though I added Crawl-delay: 1 in robots.txt and reduced the scan time and frequency in Bing Webmaster Tools.
At this point I'm kinda confused about what to do (maybe also because of the flu), and it is not clear to me whether everything works great or there is something that really does not work. Any suggestions to understand how the situation actually is and whether I can do something to improve it?
1 sec? Try Crawl-delay: 5 (5 sec).
Run it until it croaks -- who wrote that PHP script that is (possibly) leaking?
whois 113.200.85.109
CHINA, ofc
Sorry, this time I did not really understand what I have to do; maybe I'm still kinda foggy from the flu...
I wrote every single line of code of all my sites; my rule is to write code in the simplest possible way, to avoid as many problems as possible. I can't figure out what creates the problems and how to fix them. Sorry, I'm not at my best these days...
Do I have to activate/set something on the server in order to use these?
PHP Code:
IDK, but if you make a script and run it -- then look at it in the browser with the Firefox Live HTTP Headers add-on; or curl your test page URL:
Code:
barry@paragon-DS-7:~$ curl -I 'https://gfy.com/'
(-I makes a HEAD request; -X HEAD tends to hang waiting for a body that never comes)
On the SEO Q?: if your content has not changed, then you would be gaming Googlebot and the other indexing bots...
Did not read the whole 4 pages, but:
- Any slow DB queries, or intensive queries being executed together? Enable the slow query log and check what happens. If there are slow queries that mark the start of the overloads, are they malformed queries or regular ones? If they are a result of the overload and not the cause, they would pop up after the CPU spikes -- so then what is PHP doing at the same time?
- Is the DB itself optimised, with indexes and everything?
- Is disk I/O out of the question, or is there significant disk wait while the CPU hits 100%, or shortly before that moment?
- What does "top" show when this happens? Is PHP spiking its CPU usage or is MySQL showing on top?
My opinion is that you should hire someone and get them to profile the whole thing if that is possible, as the graphs above don't provide any clues. And I don't think that you have a problem with bots either.
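Enabling the slow query log does not need a restart on any recent MySQL; a sketch (the threshold and log path are just examples):
Code:
mysql -u root -p -e "
  SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
  SET GLOBAL long_query_time     = 1;   -- log anything slower than 1 second
  SET GLOBAL slow_query_log      = 'ON';"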
If it was a hardware issue, DigitalOcean would be aware of it. You are using a VPS cloud, so you are not an isolated user with dedicated hardware.
You are having the same sort of problems you were having with your shared hosting.
Code:
root@paragon-DS-7:/var/log/mysql# zcat error.log.4.gz
Check your PHP code:
Code:
$ php file.php > file_name_warnings.txt
So, I think we need a fuller view.
On the other VPS it was hard to say; the sites could not stay online 24 hours in a row... However, on DigitalOcean I have 2 servers, one in NY and the other in Bangalore, and they practically go into crisis together.
The error that I find most often in the logs is always the one about too many redirects. It has not gone away and I do not know what else to do, but it is an error that is always there, not only when the server goes into crisis, but also when the CPU works at 20-30%.
The page with the most visits on all my sites is the internal search engine, about 80-90% of the visits, and a MySQL table that also contains about 80-90% of the content of the site, in this case about 400,000 search tags and about 5,000 pages of content. This is the structure of the table:
Code:
-- Table structure for table `tgp_search`
Code:
SELECT query, MATCH(query) AGAINST('tits') as score_search FROM tgp_search WHERE MATCH(query) AGAINST('tits') ORDER BY views, score_search DESC LIMIT 25
Code:
SELECT SQL_CALC_FOUND_ROWS id_photo, title, description, url, img, MATCH(title, description) AGAINST('tits') as score FROM tgp_photo WHERE MATCH(title, description) AGAINST('tits') ORDER BY score DESC LIMIT 0, 66
How heavy are these compared to a simple query like LIKE '%boobs%'?
I have made complete PDFs of the NIXStats statistics of the last 15 days, including everything. I/O is used very little, as is the disk; the RAM is low, but that is all we have; the most active are the CPU and the MySQL process.
ubuntu-1gb-nyc3-01-NIXStats.pdf ubuntu-2gb-blr1-14-04-3-NIXStats.pdf
Obviously the site uses MySQL heavily: when the CPU goes to 100%, the MySQL process goes to 150-200% (although I do not understand how that is possible :eek7).
I activated the log on long queries about 4 hours ago; for now it is empty. I almost thought I'd try to block Bing for a few days, just to see what happens and whether it can be its fault, but maybe it's a stupid idea. I'm very undecided on whether or not to use the Last-Modified and If-Modified-Since headers, because I'm always afraid that once they are set, big G won't consider the pages anymore.
Anyway, now my sites are online 24 hours a day, seem fast, and work without problems even when the server goes into crisis.
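One quick check worth doing on those queries is EXPLAIN, to confirm the FULLTEXT index is actually used and to see how many rows get examined (a sketch; the database name is a placeholder, the table and columns are the ones above):
Code:
mysql -u root -p your_db -e "
  EXPLAIN SELECT query, MATCH(query) AGAINST('tits') AS score_search
  FROM tgp_search
  WHERE MATCH(query) AGAINST('tits')
  ORDER BY views, score_search DESC
  LIMIT 25;"
Also worth knowing: SQL_CALC_FOUND_ROWS forces MySQL to count every matching row even though only 66 are returned, which gets expensive when a popular term matches a large slice of 400,000 rows.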
So, these days I tried, studied, tested, but .htaccess and URL rewriting I understand very little...
This error is still there... it is always there...
Code:
[Fri Nov 10 21:22:42.691073 2017] [core:error] [pid 23733] [client 95.108.129.196:58013] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
I did not use the structure indicated in the tutorials, "/var/www/html/site.com/public_html". I never liked public_html, so I ignored it; in site.com there are the files of my sites. http://porn-update.com/temp/Schermat...2022-07-15.png
All my .htaccess files start with this line:
Code:
RewriteBase /
A virtually identical error occurs with WordPress multisite (which I have never used and do not know) on Apache 2.4. The solution seems to be this couple of lines:
Code:
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
Any ideas to help?
P.S. Another thing: can I run a different PHP version per site -- PHP 7 generally, and PHP 5 on a single site?
mod_rewrite - Apache HTTP Server Version 2.4
Quote:
!-d not a directory
!-f not a file
! is a negation
= equal to
!= not equal to
I don't think that's my solution...
None of this exists physically on the server; it is all URL rewriting.
Code:
http://www.alternativegirlshardpics.com/search-boobs/
Code:
http://www.alternativegirlshardpics.com/search.php?query=boobs
I begin to have some doubts that the problem is the URL rewriter at all. On the same page, there is also this:
Code:
tail -f error_log|fgrep '[rewrite:'
Today I decided to download the log files locally; the result is 0, not found.
Over the weekend I also tried to block Bing via an IP list downloaded from IP2Location http://porn-update.com/temp/Schermat...2022-30-58.png
The CPU behavior hasn't changed much; Bing is not the problem.
cd to the path for error_log
Code:
$ tac error_log|grep 'rewrite:'|cat -n|less
The answer is empty... nothing... only: (END)
Very strange... :uhoh There seem to be no errors with the URL rewriter, but searching the redirect error in the search engines, the only solution I find refers to correcting an error in the .htaccess URL rewriting of a WordPress multisite... I'm really confused :eek7
Sometimes strange things happen...
For example, I deleted a site: I disabled it, deleted the files and databases, and deleted the virtual host files, but I forgot to change the DNS... All the site's requests ended up on the first site (in alphabetical order) on my server... Now I have changed the DNS too, but I keep seeing errors like this:
Code:
[Thu Nov 16 04:15:35.503536 2017] [:error] [pid 4988] [client 66.249.64.128:48685] script '/var/www/html/mysite.com/search.php' not found or unable to stat
But I find even stranger errors, like this:
Code:
[Thu Nov 16 04:16:00.637321 2017] [:error] [pid 4835] [client 199.59.91.34:49152] script '/var/www/html/mysite.com/status.php' not found or unable to stat
Sometimes I also see errors related to WordPress, about directories or files like wp-admin or wp-login, but there is no WordPress on my servers.
Code:
[Thu Nov 16 04:43:18.759137 2017] [:error] [pid 7967] [client 120.28.68.192:64647] script '/var/www/html/veryhardsexupdates.com/wp-login.php' not found or unable to stat
:eek7:eek2:eek7
Code:
barry@paragon-DS-7:~$ host 66.249.64.128
So there is a 404 -- so what? You could set up a Permanent Redirect.
Quote:
Something really weird is happening...
Today I got an AdSense alert with this URL:
Quote:
The structure with subdomains (e.g. analvideoupdates.bigbigbigboobs.com) existed on the old VPS, where bigbigbigboobs.com was the "main domain" and for each domain added, a sub-domain of the main domain was created. I have absolutely no idea why, or what they were used for, but that idiot cPanel created them... I have never indexed them or used them, but for some strange reason the search engines seem to know that they exist.
The thing that worries me now is that every time I changed VPS the "main domain" changed: in order not to put all the sites offline, a new one was created and everything was moved quietly. Now I have no idea how many of these subdomains have been created and under what "main domains"...
For now I have temporarily solved the situation by moving a site, so now even the first one is a porn website. I do not think I can do everything with an Apache alias; I thought I would create a script that intercepts the request (including domain, subdomain and query string) and redirects, based on the sub-domain, to the right site.
If you are redirecting 404's, DON'T.
Better to let links that do not exist dead-end and get deindexed.
Interesting Topic :3
Quote:
I have this in all my .htaccess files and a 404 page on all my sites (e.g. 404 Page Not Found | Alternative Girls Hard Pics):
Code:
ErrorDocument 404 /404.php
Quote:
I thought I would retrieve something with these 2 sites (one per server, first site in alphabetical order): Adult Hashtag, Adult Sex Search. This type of website saves searches from the query string... at least it recovers the search keywords read by search engines on all those sub-domains. What do you think? Stupid idea? :disgust
Your 404.php is OK: it says 'page not found'.
I meant a 404 that (302) redirects to index.php.
Are the permissions correct if practically everything in the /html directory has these?
Code:
-rwxr-xr-x www-data www-data
Generally it is either
user:www-data
or
www-data:www-data if you don't want to put files via sftp or ftp.
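A common baseline, as a sketch (adjust the docroot path to yours): your user owns the files, the www-data group can read, directories 755 and files 644:
Code:
sudo chown -R "$USER":www-data /var/www/html
find /var/www/html -type d -exec chmod 755 {} +   # directories: rwxr-xr-x
find /var/www/html -type f -exec chmod 644 {} +   # files: rw-r--r--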
Maybe I found the bastard who's cloning my sites.
Right now my server is at 100% CPU, 100% memory, load 50, sites practically offline. In access.log I see a lot of accesses from this IP, 93.105.187.11:
Code:
93.105.187.11 - - [03/Dec/2017:02:10:52 +0000] "GET /page-12/search-mooty+mooty+boobs+dhod+figar+onley+fack+pick+full+screen+hd/random/ HTTP/1.1" 200 25269 "http://www.monsterboobshardpics.com/page-9/search-mooty+mooty+boobs+dhod+figar+onley+fack+pick+full+screen+hd/random/" "Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0"
I tried to block it with this:
Code:
ufw deny from 93.105.187.11 to any;
But I still see accesses from this IP... am I doing something wrong? Why isn't it blocked? The firewall is enabled and I have also restarted the server.
Now my second server has also started to go into crisis, and I find the same IP here too:
Code:
93.105.187.11 - - [03/Dec/2017:02:35:43 +0000] "GET /page-23/search-pics+gigantomastia+granny+tits/ HTTP/1.1" 200 24056 "http://www.bigboobsupdate.com/page-26/search-pics+gigantomastia+granny+tits/" "Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0"
I don't have allow rules, just Apache, Postfix and SSH...
My UFW is really weird; if I try again:
Code:
ufw deny from 93.105.187.11 to any;
this IP continues to pull content from my server... UFW is not doing anything...
I saw that DigitalOcean can apply a firewall to the droplet (DigitalOcean Cloud Firewalls, https://www.digitalocean.com/communi...loud-firewalls), whose limits are: "Total incoming and outgoing rules per firewall: 50". I do not use it because I trust Ubuntu's UFW, but maybe the DigitalOcean firewall also has an effect on UFW? Does it only read the first 50 rules? (I have 534 now; it really would be crap if it works like that...)
As a last hope, I tried to launch this:
Code:
iptables -I INPUT -s 93.105.187.11 -j DROP
http://porn-update.com/temp/Schermat...2000-48-30.png
But I have no idea how iptables works, and from what I understand, if I reboot the server everything resets... Can anyone help me figure out how to properly set up iptables, or how to make UFW work properly?
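For the reboot problem specifically, the usual Ubuntu route is the iptables-persistent package; a sketch:
Code:
sudo apt install iptables-persistent
# dump the live rules to the file that is restored at boot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'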
So... these days I studied iptables a little (very little).
I took a little courage... I launched sudo ufw reset and deleted all the rules of the server firewall. Then I re-applied the basic rules both in UFW and in iptables. I found out how to block an IP and a list of IPs with this script: Aggiungere regole a iptables da lista IP e renderla persistente (adding rules to iptables from an IP list and making them persistent), and added all the IPs that I want to block (regional, Baidu, Yandex, bad spiders etc.). I installed iptables-persistent, saved, and restarted the server...
Aaaand I made a mess... :disgust
The sites seem to be online and work well, but initially I did not receive mail from the server (e.g. the cronjob mails) and I did not see the NIXStats statistics. With some adjustments NIXStats now works and some mail has arrived, but I'm afraid I have made some other mess, because the statistics of the server with Ubuntu 14.04 in particular (my other server is 16.04) have fallen drastically these days, or at least so the statistics services say... Maybe I blocked the access of Yandex Metrica and Analytics to the server? I don't know...
Today, after some modification and reboot, when I opened the rules.v4 file I found it empty... Did I miss something? I don't really know what I'm doing... :helpme
This is my current rules.v4 -- is something missing? Is there anything extra? http://porn-update.com/temp/rules.v4