GoFuckYourself.com - Adult Webmaster Forum (https://gfy.com/index.php)
-   Fucking Around & Business Discussion (https://gfy.com/forumdisplay.php?f=26)
-   -   Perl code (https://gfy.com/showthread.php?t=73527)

kenny 08-23-2002 04:07 PM

Perl code
 
When sending data to a log, how can I make the newest entry take the top line and push the old entries down, instead of the new entry being added to the bottom of the list?

NetRodent 08-23-2002 04:17 PM

Here's one way to do it, although it is much more resource intensive to do it this way as opposed to simply appending the listing at the end:

$newdata = 'whatever';

local $/ = undef;
open(IN, "<$file");
$olddata = <IN>;
close IN;

open(OUT, ">$file");
print $newdata."\n";
print $olddata;
close OUT;

Babaganoosh 08-23-2002 04:21 PM

What exactly are you doing? When you actually access the data, you can just reverse the array and print lines in the reverse order. That may not be the best way for every application, but it's generally what I do.

http://www-2.cs.cmu.edu/People/rgs/pl-exp-arr.html

kenny 08-23-2002 05:02 PM

I am writing user-submitted links to a .txt file and including the .txt file in an HTML page with the SSI include command. I wanted the newest submissions to appear first.

kenny 08-23-2002 05:07 PM

I guess I can use an SSI exec cgi command and write a small script to reverse the order of the log file. I am not much of a programmer, but I can figure out ways to do things even if they aren't the best way :glugglug

Babaganoosh 08-23-2002 05:07 PM

Yep, in that case I would just use reverse.

fiveyes 08-23-2002 06:22 PM

Ahhh, perl!

Armed & Hammered's suggestion works, but I don't feel it's that good. The reversal of data would require an SSI call to a program to do it, and whatever savings in adding an item would be more than offset by the overhead involved every time the document is viewed. This holds unless you actually have more people adding data than simply viewing the listing, which is unlikely.

NetRodent's response is better, in that it concentrates the resource usage to only when adding a new listing and then simply echoes the list when it's viewed. However, the "print" statements are not going to the file handle he opened (OOPS!), one should always check for the failure of file operations, and, most importantly, there has to be a way of locking the code in a CGI environment when you're doing file operations. Otherwise, you run the risk of two separate processes colliding, and the list will be corrupted from that point forward.

Code:

    $file = '/path/to/list.txt';

    open(FILE, "+< $file") or die "can't open $file: $!";
    flock(FILE, 2);

    @lines = <FILE>;

    # Write the new addition first
    (print FILE "$newdata\n") or die "can't write to $file: $!";
    # Copy original to new
    while (@lines) {
        (print FILE shift(@lines)) or die "can't write to $file: $!";
    }

    # No need to explicitly unlock,
    #  "close" does it for us
    close(FILE) or die "can't close $file: $!";


richard 08-23-2002 06:42 PM

Quote:

Originally posted by fiveyes
<snip>This holds unless you actually have more people adding data than simply viewing the listing, which is unlikely.
Is a good point.

Nbritte 08-23-2002 07:08 PM

read the file into @data

unshift(@data,$newline);

then save @data

Brian

Nbritte 08-23-2002 07:21 PM

ok, here is the long version

&lock2("filename.lock");       # lock the file so no one can write while processing data
@data = &readfile("filename"); # read the data from the file
unshift(@data, $newline);      # shift the array down and add the new line at the top
&writefile("filename", @data); # write the data back to the file
if (-e "filename.lock") { unlink("filename.lock"); } # remove the lock file so other processes can use it

sub readfile {
    $fname = $_[0];
    open(INF, "<$fname") || &error("unable to open $fname : $!");
    @data = <INF>;
    close(INF);
    return @data;
}

sub writefile {
    ($fname, @data) = @_;
    open(OUTF, ">$fname") || &error("unable to open $fname : $!");
    foreach $data (@data) {
        chomp($data);
        print OUTF "$data\n";
    }
    close(OUTF);
}

sub lock2 {
    $lockf = $_[0];
    local($flag) = 0;
    foreach (1 .. 5) {
        if (-e $lockf) { sleep(1); }
        else {
            open(LOCK, ">$lockf");
            close(LOCK);
            $flag = 1;
            last;
        }
    }
    if ($flag == 0) { &error("Lock file busy"); }
}


Brian

fiveyes 08-23-2002 08:34 PM

Quote:

Originally posted by Nbritte
...
sub lock2 {
    $lockf = $_[0];
    local($flag) = 0;
    foreach (1 .. 5) {
        if (-e $lockf) { sleep(1); }
        else {
            open(LOCK, ">$lockf");
            close(LOCK);
            $flag = 1;
            last;
        }
    }
    if ($flag == 0) { &error("Lock file busy"); }
}

This leads to what is known as a "race condition".

See, the problem is that the operating system is doing time slices for a number (possibly hundreds!) of processes at any given moment. In a high-level language such as perl, there is no way for you to either determine or control when one process will be suspended and another get control of the CPU.

For locking to be effective, where one and only one process will be able to complete a code section before another can start into it, an "atomic action" is needed. That is, a single, undividable command must be executed to lock that section. Anything less will be a heartbreaker, and a lot sooner than you might expect.

Now, in the example you give, Process A may fail to find $lockf but be immediately suspended. While it sits waiting to resume execution, Process B arrives at the same point and it, too, fails to find $lockf. It then sets the lock, reads half of the data file, and gets suspended. Process A then gets the CPU and proceeds to set a lock that's already set (no error there!), reads the data file in, writes half of it out, and gets suspended. Back to Process B, which reawakens to read in the rest of that data file. Only now, its file pointers are invalid, because Process A was interrupted in the middle of writing to it!

Result: You've just lost your data file. It has been corrupted beyond recovery.

Don't try to roll your own in software! The "flock" command is really one of the very few ways to check for a flag and set it if it isn't there, both in a single atomic step. The only example I've ever seen that worked properly in perl was by Earl Hood in an early version of MHonArc. And that was one strange bit of coding...
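
For illustration, one atomic way to take a lock file in perl is sysopen with the O_EXCL flag: the kernel creates the file only if it doesn't already exist, so test and create happen as a single indivisible step. An untested sketch, reusing the &error helper from the code above:

Code:

    use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

    sub lock3 {
        my $lockf = $_[0];
        foreach (1 .. 5) {
            # create-if-absent and fail-if-present happen as one atomic step
            if (sysopen(LOCK, $lockf, O_WRONLY | O_CREAT | O_EXCL)) {
                close(LOCK);
                return 1;    # we own the lock
            }
            sleep(1);        # someone else holds it; wait and retry
        }
        &error("Lock file busy");
    }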

fiveyes 08-23-2002 08:56 PM

For those interested in that "strange bit of coding" I referred to above:
http://groups.google.com/groups?selm...&output=gplain

mrthumbs 08-23-2002 08:58 PM

Quote:

Originally posted by fiveyes
This leads to what is known as a "race condition". <snip>

holy shit man.......

posh rat in hell 08-24-2002 06:17 AM

The only safe way to do this is to append to the end and then later sort it the other way around.

FuqALot 08-24-2002 06:41 AM

Yo,

$file = "x.txt";
open(logfile,"+&lt;".$file);
flock logfile, 2;
@logdata = &lt;logfile&gt;;
seek logfile, 0, 0;
print logfile $newline."\n";
print logfile @logdata;
truncate logfile, (tell logfile);
close(logfile);

is the way. Where $newline is the entry, and $file the logfile.

fiveyes 08-24-2002 08:05 AM

FuqALot -
Good to see another perlhead! The best way to learn perl programming is to write programs; the second best is to butt heads with others...

The statement "seek logfile, 0, 0;" does nothing. Why do I say that? Because you have just read the entire file into memory. From this point on, the filehandle "logfile" is used to write the new information to. As soon as the first "print" is attempted, the original file is truncated to nothing; it exists, but is 0 bytes in length. No need to find the start of it; it's the same as the end. No need to account for another process manipulating the file in the meantime, either; we've locked them out.

The same applies to the statement "truncate logfile, (tell logfile);". Here you are setting the end of the file to where the end should be after you've finished writing: nothing more, nothing less. It can't be anything else, because:

As long as all other processes that might modify the file in question use the same locking mechanism to access it (and if they don't, you're already lost!), the file pointers (beginning, end, and where you are within it) will be valid and remain unchanged without having to be explicitly set. The lock which you have (effectively) set at the beginning of this code sequence ensures that you (a specific instantiation of a CGI process), and only you, have access to this code section. If the lock is effective, those pointers cannot change unless you say otherwise, which really is the reason for locking the file in the first place. If the lock is ineffective (where another process could interrupt your actions and modify the file outside of the process in question), there is no "seek", "tell" or "truncate" that will rectify the situation.

Once a file handle is corrupted, it's lost for good.

fiveyes 08-24-2002 08:26 AM

Quote:

Originally posted by posh rat in hell
The only safe way to do this, is append to the end, and then later sort it the other way around.
Oh! Please, tell us exactly what is "unsafe" about putting the line at the beginning of the file and then just reading it out, as is, when needed.

Are you afraid you might run out of room in your lower disk space?

Fletch XXX 08-24-2002 08:34 AM

Proud to have 5 eyes representing The Boot. (NOLA)

Nice.

Babaganoosh 08-24-2002 08:52 AM

http://www.tgpdevil.com/reverse.txt

Really, what is wrong with reverse? You can do it in a single line of code. There is absolutely nothing wrong with doing this, and it's more efficient than any of the examples I have seen so far.

Had to edit since the board didn't like my file handles.
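
In case that link dies, the gist is something like this (a sketch, with a made-up filename): keep appending new entries to the end of the log as usual, then flip the lines when displaying.

open(LOG, "<links.txt") or die "can't open links.txt: $!";
print reverse <LOG>;
close(LOG);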

FuqALot 08-24-2002 09:04 AM

Quote:

Originally posted by fiveyes
The statement "seek logfile, 0, 0;" does nothing.
I don't agree with that.
First, the example you gave in your first post didn't really get through my compiler ;), which is why I gave another example.

If you drop the seek, then it doesn't know where to write; you can try it. In other words, it won't write anything :-).
You're right about the truncate, though. It's not necessary, and I wouldn't include it if speed were a must, but it isn't here.

What I gave is standard code I've always used for things like this, and it sure works :-).

fiveyes 08-24-2002 09:29 AM

Armed & Hammered-
Your suggestion works, as I said in the beginning. But let's disregard the fact that the data might be accessed more often than it's updated, which would make your suggestion just a wee tad cumbersome.

The fact remains that the OP's original question was:
"When sending data to a log how can I make the newest entry take the top line and push the old entries down? Instead of the new entry being added to the bottom of the list."

I really don't know why he wants to do this. But it was pretty clear, at least to me, that that's really and truly what he wants to do.

Now, posh rat in hell might speak up at any moment and prove how rewriting a file with a new line at the beginning of it is somehow less safe than rewriting it at the end. But until then, can we just consider the OP's original question answered?

FuqALot 08-24-2002 09:37 AM

Quote:

Originally posted by fiveyes
Now, posh rat in hell might speak up at any moment and prove how rewriting a file with a new line at the beginning of it is somehow less safe than rewriting it at the end.
Indeed. Heh.

fiveyes 08-24-2002 09:52 AM

Quote:

Originally posted by FuqALot


I don't agree with {My statement that "seek logfile, 0, 0;" did nothing}

I sit at the computer corrected. As http://www.rocketaware.com/perl/perlfunc/seek.htm clearly tells me: "On some systems you have to do a seek whenever you switch between reading and writing." Though I'm certain I've read this before, I had overlooked it.

Whether the code runs with or without it on my system, it should definitely be included. If it's unneeded, it does no harm. However, if it is needed, it had better be there!
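
So, for the record, the combined version with the seek, the truncate, and error checking would look something like this (untested):

Code:

    $file = '/path/to/list.txt';

    open(FILE, "+<$file") or die "can't open $file: $!";
    flock(FILE, 2) or die "can't lock $file: $!";

    @lines = <FILE>;

    # back to the top, then the new line first, old lines after it
    seek(FILE, 0, 0) or die "can't seek in $file: $!";
    print FILE "$newdata\n", @lines or die "can't write to $file: $!";
    truncate(FILE, tell(FILE)) or die "can't truncate $file: $!";

    close(FILE) or die "can't close $file: $!";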

Thanks for the heads up.

kenny 08-24-2002 11:23 AM

Quote:

Originally posted by FuqALot
Yo,

$file = "x.txt";
open(logfile,"+&lt;".$file);
flock logfile, 2;
@logdata = &lt;logfile&gt;;
seek logfile, 0, 0;
print logfile $newline."\n";
print logfile @logdata;
truncate logfile, (tell logfile);
close(logfile);

is the way. Where $newline is the entry, and $file the logfile.


open(LOG, "+<".$log);
flock LOG, 2;
@logdata = <LOG>;
seek LOG, 0, 0;
print LOG '<TR><TD>';
print LOG $group[1];
print LOG '</TD>';
print LOG '<TD><A HREF=';
print LOG $url[1];
print LOG '>';
print LOG $description[1];
print LOG '</A></TD></TR>';
print LOG "\n";
print LOG @logdata;
truncate LOG, (tell LOG);
close(LOG);

This worked like a charm. I liked the reverse open idea, just couldn't get it to work.

Thanks everyone


:thumbsup

willow 08-24-2002 12:14 PM

perldoc -f flock

Here's a mailbox appender for BSD systems.

use Fcntl ':flock'; # import LOCK_* constants

sub lock {
    flock(MBOX, LOCK_EX);
    # and, in case someone appended
    # while we were waiting...
    seek(MBOX, 0, 2);
}

Just to clarify what the seek is really for, and why you usually get away without needing it. The whole exercise, however, was to avoid race conditions.
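
For completeness, the rest of that perldoc example goes roughly like this (quoted from memory):

sub unlock {
    flock(MBOX, LOCK_UN);
}

open(MBOX, ">>/usr/spool/mail/$ENV{'USER'}")
    or die "Can't open mailbox: $!";

lock();
print MBOX $msg, "\n\n";
unlock();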

