htpasswd Problem
i've been having an issue with this for several years now and i'm curious if other people are experiencing the same thing or not.
each month i have to get an updated htpasswd file from ccbill, and when i compare it to the current file on my server there are always 40-50 members still on my server's copy that aren't in the updated file. i've talked to several techs at ccbill and can't seem to get a straight answer about why this is happening. i was originally told that sometimes their script (for whatever reason) can't write to the htpasswd file, so the member's login doesn't get removed. if that's the case, does it also mean that 40-50 new members can't get access each month because the script failed to add them to the htpasswd file?
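For anyone doing that monthly audit by hand, a short script can list the mismatches. This is a minimal sketch, not anything from CCBill; the file names are placeholders and it assumes the usual one-entry-per-line "user:password" htpasswd format.

#!/usr/bin/perl
# minimal sketch (not ccbill's code): list username mismatches between the
# updated htpasswd file from ccbill and the one currently on the server.
use strict;
use warnings;

my %server = read_users('htpasswd.server');   # placeholder file names
my %ccbill = read_users('htpasswd.ccbill');

print "still on the server but gone from ccbill's file (never removed):\n";
print "  $_\n" for sort grep { !exists $ccbill{$_} } keys %server;

print "in ccbill's file but missing on the server (never added):\n";
print "  $_\n" for sort grep { !exists $server{$_} } keys %ccbill;

sub read_users {
    my ($file) = @_;
    open my $fh, '<', $file or die "can't open $file: $!";
    my %users;
    while (my $line = <$fh>) {
        chomp $line;
        my ($user) = split /:/, $line;     # username is everything before the first colon
        $users{$user} = 1 if defined $user && length $user;
    }
    close $fh;
    return %users;
}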
Please contact me. I would like to look into this for you. Thanks.
ccbill's user management script has always had massive problems. i've proved this several times on several different sites using the logs their script writes. it is a very unstable script.
the problem is that it performs several separate actions when removing or updating a record. for example, on a rebill, the script will delete a username and then reconnect to re-add it. if something happens, such as the file being momentarily locked by the server, either of those two actions can fail, and their script never verifies that the action actually succeeded. if you have problems, you are better off using the datalink and relying on your own password management. otherwise expect to spend some time every month auditing your member list.
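For comparison, here is a minimal sketch of the kind of verified, all-or-nothing update the post says CCBill's script doesn't do: hold a lock on a separate lock file, write the new contents to a temp file, and only rename it over the live htpasswd once the write has succeeded. The path, username, and overall layout are assumptions for illustration, not CCBill's actual script.

#!/usr/bin/perl
# sketch: remove one username from an htpasswd file by writing a temp file
# and renaming it into place, so a failed write can't leave a half-updated file.
use strict;
use warnings;
use Fcntl qw(:flock);

my ($htpasswd, $user_to_remove) = ('/path/to/.htpasswd', 'someuser');  # placeholders

open my $lock, '>', "$htpasswd.lock" or die "can't open lockfile: $!";
flock($lock, LOCK_EX) or die "can't lock: $!";    # blocks until the lock is free

open my $in,  '<', $htpasswd       or die "can't read $htpasswd: $!";
open my $out, '>', "$htpasswd.tmp" or die "can't write temp file: $!";
while (my $line = <$in>) {
    my ($user) = split /:/, $line;
    print {$out} $line unless defined $user && $user eq $user_to_remove;
}
close $in;
close $out or die "write to temp file failed: $!";

# only replace the live file once the temp file was written out successfully
rename "$htpasswd.tmp", $htpasswd or die "rename failed: $!";

flock($lock, LOCK_UN);
close $lock;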
thanks for the reply, hopefully they can fix this.
|
Quote:
In Perl, file locking works better if you get a lock on another file instead of the real file. Example:

open(AA, ">lock$filename");
flock(AA, 2) or die;        # 2 = LOCK_EX
open(BB, ">$filename");
print BB $something;
close(BB);
flock(AA, 8) or die;        # 8 = LOCK_UN

* Instead of "die" you can write an error routine (maybe sleep 1 and then try again).

The above is better than this below:

open(BB, ">$filename");
flock(BB, 2);
print BB $something;
flock(BB, 8);
close(BB);

This causes a problem when two surfers hit the script at the same time, because one of the surfers will get "file locked". BUTTTTT!!! Perl will not wait for the unlock and prints nothing to the file. I would say that the CCBill script probably gets a "file already locked" but does not die, run an error routine, or wait. The script continues on but cannot write to the file. The script opens the passwd file as read/write, and that is the only reason the entire passwd file is not erased when you have this problem. That's why in my first example I get the lock on a different file named "lock$filename"; that way I don't wipe the real file when it has been opened for writing but the lock failed.

In the CCBill script, maybe a change like this will stop it:

sub lock($) {
    no strict "vars";
    my $fh = shift;
    flock($fh, $LOCK_EX);
}

Change the above to this:

sub lock($) {
    no strict "vars";
    my $fh = shift;
    flock($fh, $LOCK_EX) or &pleasewait($fh);
}

sub pleasewait($) {
    sleep(1);
    no strict "vars";
    my $fh = shift;
    flock($fh, $LOCK_EX) or &pleasewait($fh);
}

Disclaimer: if all hell breaks loose then I was not here. :) Somebody test this, I have other stuff to do right now.
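One caveat with the pleasewait() idea above: it retries forever if the lock never frees. A bounded retry is one alternative; lock_with_retry below is a hypothetical helper for illustration, not part of the CCBill script.

use Fcntl qw(:flock);

# hypothetical helper: try the exclusive lock a few times, sleeping between
# attempts, and report failure instead of looping forever.
sub lock_with_retry {
    my ($fh, $tries) = @_;
    $tries ||= 5;
    for my $attempt (1 .. $tries) {
        return 1 if flock($fh, LOCK_EX | LOCK_NB);   # non-blocking attempt
        sleep 1;
    }
    return 0;    # caller decides whether to die or log the failure and bail out
}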
Quote:
file reading could be locking it, or it could just be that their script hits the file twice within a second on a rebill. i've seen big problems on passfiles with over 500 members. it takes a second for the server to open a file that big, read it, and save it; with two pings in a row, one will fail. i always go to datalink when a site gets that big. ccbill should be the one testing this. their script hasn't changed in a decade.
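The "two pings in a row, one will fail" behaviour is easy to reproduce: a non-blocking exclusive flock on a file that another process already holds returns false immediately instead of waiting. A rough sketch, assuming flock behaves like the Linux flock(2) call; the file name is a placeholder:

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $file = 'testpass';    # placeholder standing in for the htpasswd file

open my $fh, '>>', $file or die $!;
flock($fh, LOCK_EX) or die "parent lock failed: $!";   # first hit holds the lock

# simulate the second hit arriving while the first is still busy
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    open my $fh2, '>>', $file or die $!;
    if (flock($fh2, LOCK_EX | LOCK_NB)) {              # non-blocking attempt
        print "child: got the lock (unexpected while the parent holds it)\n";
    } else {
        print "child: lock refused: $!\n";             # the path the second hit takes
    }
    exit 0;
}

sleep 2;                   # parent keeps the lock while the child tries
flock($fh, LOCK_UN);
close $fh;
waitpid($pid, 0);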