Quote:
Originally posted by Nbritte
...
sub lock2 {
    $lockf = @_[0];
    local($flag) = 0;
    foreach (1 .. 5) {
        if (-e $lockf) { sleep(1); }
        else {
            open(LOCK,">$lockf");
            close(LOCK);
            $flag = 1;
            last;
        }
    }
    if ($flag == 0) { &error("Lock file busy"); }
}
This leads to what is known as a "race condition".
See, the problem is that the operating system is time-slicing among a number (possibly hundreds!) of processes at any given moment. In a high-level language such as Perl, there is no way for you to either determine or control when one process will be suspended and another gets control of the CPU.
For locking to be effective, where one and only one process is able to complete a code section before another can enter it, an "atomic action" is needed. That is, a single, indivisible command must both test and take the lock. Anything less will be a heartbreaker, and a lot sooner than you might expect.
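To make "atomic" concrete, here is a minimal sketch of a check-and-create that the kernel performs in one indivisible step, using sysopen with the O_CREAT and O_EXCL flags. The lock file name is purely illustrative, and I'm reusing your &error sub; this approach has its own caveats (it historically misbehaved over NFS), so take it as an illustration of atomicity rather than a recommendation:

use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

my $lockf = "myscript.lock";   # illustrative name only
if (sysopen(LOCK, $lockf, O_WRONLY | O_CREAT | O_EXCL)) {
    close(LOCK);
    # ... critical section: the file did not exist, so we hold the lock ...
    unlink($lockf);            # release the lock when finished
}
else {
    &error("Lock file busy");  # some other process created the lock file first
}

Because the existence test and the file creation happen inside a single system call, no time slice can fall between them, which is exactly what the five-line test-then-open loop above cannot guarantee.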
In the example you give, however, Process A may fail to find $lockf but be suspended immediately afterward. While it sits waiting to resume execution, Process B arrives at the same point and it, too, fails to find $lockf. It then sets the lock, reads half of the data file, and gets suspended. Process A then gets the CPU and proceeds to set a lock that's already set (no error there!), reads the data file in, writes half of it back out, and gets suspended. Back to Process B, which reawakens to read in the rest of that data file. Only now its file pointer is no longer valid, because Process A was interrupted in the middle of writing to it!
Result: You've just lost your data file. It has been corrupted beyond recovery.
Don't try to roll your own in software! The "flock" function is really one of the very few ways to check for a lock and set it if it isn't there, both at the same time. The only hand-rolled example I've ever seen work properly in Perl was by Earl Hood in an early version of MHonarc. And that was one strange bit of coding...
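For what it's worth, here is a minimal sketch of the flock approach. The file name is a stand-in, I'm again borrowing your &error sub, and the lock is advisory, meaning every process that touches the file has to ask for the lock the same way:

use Fcntl qw(:flock);

open(DATAFILE, "+<data.txt") or &error("Cannot open data file: $!");
flock(DATAFILE, LOCK_EX)     or &error("Cannot lock data file: $!");
# ... read and rewrite the file here; any other process that calls
#     flock on the same file will block until this one is finished ...
close(DATAFILE);               # closing the handle releases the lock

The flock call itself is the atomic step: the kernel either grants the exclusive lock or puts the process to sleep until it can, so there is no test-then-set window for another process to slip through.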