GFY stats, how I gathered, parsed, and queried the data
Posted by johnnyloadproductions, 04-18-2016, 02:24 PM

I had gathered and parsed this data before, but recently redid it with a cleaner approach. Here's how I gathered, parsed, and queried the data from GFY.

The first step was gathering the text from all the threads. I used a PHP script that followed redirects and had logic to determine whether a thread was longer than 50 posts (one page). This was the only place I ran into issues: if a thread was exactly 50 pages long it created a spider trap, where the bot's logic kept requesting the next page and got served the exact same page again. I discovered this while the crawl was already running, so I simply lowered the maximum limit for following long threads and deleted threads that were labeled as 199 pages or longer. That drops the really long threads and contest threads, but should only cost about 1-2% of overall fidelity.

I used 4 bots running in parallel, each on a separate DigitalOcean VPS. Each bot ran for around 2-3 days; I can't remember exactly. A cron job started the script on reboot. Here's the script I used:

Code:
<?php
/**
 * GFY thread crawler: downloads the raw HTML for a range of thread IDs
 * and saves each page to disk as a .txt file. Threads longer than one
 * page (50 posts) are followed via their "Next Page" link, capped at
 * 200 pages per thread.
 */

$GFY_THREAD   = "https://gfy.com/showthread.php?t=";
$DIRECTORY    = "/home/GFY/threads";
$thread_start = 900001;
$thread_max   = $thread_start + 300000;

for ($thread_ID = $thread_start; $thread_ID < $thread_max; $thread_ID++) {

    // Fetch page 1 of the thread and write it to disk.
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $GFY_THREAD . $thread_ID);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
    $html = curl_exec($curl);
    curl_close($curl);

    file_put_contents($DIRECTORY . "/" . $thread_ID . ".txt", $html);
    echo "\nThread " . $thread_ID;

    /**
     * Pagination loop for threads with more than 49 posts: the
     * "Next Page" link only shows up when another page exists.
     */
    if (preg_match('/\btitle="Next Page\b/i', $html)) {

        for ($page_value = 2; $page_value < 200; $page_value++) {

            $curl = curl_init();
            curl_setopt($curl, CURLOPT_URL, $GFY_THREAD . $thread_ID . "&page=" . $page_value);
            curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
            $html = curl_exec($curl);
            curl_close($curl);

            file_put_contents($DIRECTORY . "/" . $thread_ID . "_" . $page_value . ".txt", $html);

            // Stop once the current page no longer links to a next page.
            if (!preg_match('/\btitle="Next Page\b/i', $html)) {
                break;
            }
        }
    }

    usleep(500); // pause 500 microseconds (0.5 ms) between threads
}
?>

Each bot harvested somewhere in the 50-55 GB range of uncompressed text files; afterward I rared them all and downloaded the archives to my local drive.

Total scraped: around 200 GB.


Pro tip: If you ever plan on parsing thousands or millions of text documents, make sure to do it on a solid state drive.

GFY doesn't have a clean DOM and even has leftovers from TexasDreams (:
Hence the parsing script looks like a mess and is full of try/except blocks.

I ran this on a 2010 Intel i7 iMac, one folder at a time; each bot's scrape took about 48 hours, plus or minus 4, to parse, which works out to around 50 posts parsed per second.

Total time: around 6 days of constant parsing.

(Parsing script in the next post; this post was too long.)
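
The full parsing script is in the next post, but as a rough sketch of the BeautifulSoup side, something like this pulls the username and post body out of each saved page. The selectors here are generic vBulletin guesses, not the exact ones from my script:
Code:
# -*- coding: utf-8 -*-
# Sketch only: the id/class selectors below are generic vBulletin guesses,
# not the exact markup GFY serves (which is much messier).
import glob
from bs4 import BeautifulSoup

for path in glob.glob("/home/GFY/threads/*.txt"):
    with open(path) as f:
        soup = BeautifulSoup(f.read(), "html.parser")

    # vBulletin wraps each post in its own table with an id like "post12345678".
    for post in soup.find_all("table", id=lambda x: x and x.startswith("post")):
        try:
            username = post.find("a", class_="bigusername").get_text(strip=True)
            body = post.find("div", id=lambda x: x and x.startswith("post_message_"))
            text = body.get_text(" ", strip=True) if body else ""
            print username.encode("utf-8"), len(text)
        except AttributeError:
            # The DOM isn't clean, so anything that doesn't parse gets skipped.
            continue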

Technologies used:
Python 2.7
BeautifulSoup4 (for parsing)
SQLAlchemy for ORM
MySQL
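
For the database side, the SQLAlchemy models were along these lines. This is only a sketch: the column names are guessed from the queries further down, and the connection string is a placeholder.
Code:
# Sketch of the SQLAlchemy/MySQL side; column names are inferred from the
# queries below and the connection string is a placeholder, not the real one.
from sqlalchemy import create_engine, Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Post(Base):
    __tablename__ = 'posts'
    id = Column(Integer, primary_key=True)
    username = Column(String(64))
    postcount = Column(Integer)  # poster's total post count at scrape time
    body = Column(Text)

class Thread(Base):
    __tablename__ = 'threads'
    id = Column(Integer, primary_key=True)
    username = Column(String(64))  # thread starter
    year = Column(Integer)
    month = Column(Integer)

engine = create_engine('mysql://user:password@localhost/gfy')
Base.metadata.create_all(engine)  # create the tables if they don't exist
connection = engine.connect()     # reused by the query loops below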



For the initial queries I just used simple queries in Sequel Pro on my Mac.

Highest Post count?
Code:
SELECT DISTINCT username, postcount FROM posts ORDER BY postcount DESC;
I had to do a lot of manual filtering in Excel because people posted during the scrape and I had hardcoded the postcount into each row :p
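
Collapsing the duplicates in SQL first would save most of that Excel work, e.g. keeping each username's highest recorded postcount (a sketch, using the same SQLAlchemy connection as the loop further down, with an arbitrary LIMIT):
Code:
# Sketch: one row per poster, keeping the highest postcount recorded
# during the scrape. `connection` is the SQLAlchemy connection used below.
dedup = ("SELECT username, MAX(postcount) AS postcount "
         "FROM posts GROUP BY username ORDER BY postcount DESC LIMIT 50")
for row in connection.execute(dedup):
    print row.username, row.postcount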

Highest thread starter
Code:
SELECT username, COUNT(*) as count FROM threads GROUP BY username ORDER BY count DESC;
For the thread and post-count frequency across the board's history I had to run a nested query loop in Python, as follows. It took over an hour to tabulate all the post counts, since there are over 20 million of them to filter through for each query.
Code:
# `connection` is a SQLAlchemy connection (engine.connect()) to the MySQL db.
# One COUNT(*) query per (year, month) pair across the board's history.
for i in range(2001, 2017):
    for j in range(1, 13):
        thread_activity = "SELECT COUNT(*) AS count FROM threads WHERE year = {0} AND month = {1}".format(i, j)
        q = connection.execute(thread_activity)

        for r in q:
            print str(r[0])
I simply used Excel to create charts of the data.
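
If you'd rather skip the copy/paste into Excel, the same loop can write a CSV that Excel charts directly; the file name and columns here are arbitrary:
Code:
import csv

# Same nested loop as above, but dumping the monthly thread counts to a
# CSV for Excel. `connection` is the same SQLAlchemy connection as above.
with open('thread_activity.csv', 'wb') as f:  # 'wb' for Python 2's csv module
    writer = csv.writer(f)
    writer.writerow(['year', 'month', 'threads'])
    for i in range(2001, 2017):
        for j in range(1, 13):
            q = connection.execute(
                "SELECT COUNT(*) FROM threads WHERE year = {0} AND month = {1}".format(i, j))
            writer.writerow([i, j, q.scalar()])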

Here's a link to the MySQL dump; I'll leave it up for a week: GFY parsed DB


