Tracking thumbnail clicks
Hello guys...
I was wondering if anyone knew of software that could track where my outgoing clicks are going. Like when someone clicks on a thumbnail and is redirected from my site to the sponsor's site, how can I find out or track where it's going? I tried GA, but it doesn't seem to have anything for thumbs. If I have like 300k thumbs, I want to be able to track where the outgoing clicks are being redirected to.
What I mean is: track your exit clicks from the thumbs.
Usually a trade script will manage traffic and clicks on thumbs, while an ad script handles traffic to sponsors. I wouldn't bother sending traffic from thumbs straight to sponsors, though.
I've done this myself in PHP for managing my own banners. Simple code. You can do it for thumbs if you want.
If you're versed in PHP, I can describe how I did it. If not, I can probably custom code it pretty quickly for your site for a reasonable sum. E-mail me if you haven't found a solution. support ~at~ manpuppy.com
Quote:
I just wanna be able to make sure that their numbers correspond with mine.
http://arrowscripts.com/
Hmm, what controls where the clicks are going? If it's you, then all you need to do is have a file like an out.php that just logs what was clicked and then redirects the user on to the site (this is how my site deals with thumbnail clicks, except I don't track them at the moment).
It would be simple enough to do. Your link could be something like:

Code:
<a href="out.php?page=#your link#"><img src="#image source#" ... ></a>

out.php itself just needs to log what was clicked and then redirect the visitor; there's a sketch of the idea below.

If you have any issues with this or need help customising it to fit your system, please feel free to either reply back here or drop me an e-mail at [email protected] (replace the 0's with o's).

Kind Regards
Rob
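A minimal sketch of what an out.php along those lines might look like (logging each click straight to MySQL; the table and connection details are placeholders for illustration, not Rob's actual code):

Code:
<?php
// out.php - log the click, then send the visitor to the destination.
// The 'out_clicks' table and DB credentials are illustrative only.
$page = $_GET['page'];

// record which link was clicked (one INSERT per click)
$db = new mysqli('localhost', 'db_user', 'db_pass', 'stats');
$stmt = $db->prepare('INSERT INTO out_clicks (url, ip, clicked_at) VALUES (?, ?, NOW())');
$stmt->bind_param('ss', $page, $_SERVER['REMOTE_ADDR']);
$stmt->execute();

// redirect on to the destination page
header('Location: ' . $page);
exit;
?>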
Yeah...don't do that. There's definitely no reason to execute a query for every click.
Check out TALSv2. www.bigdotmedia.com/tals.php
Hmmm, good point, I get what you mean now. So this would be more realistic if the function was built into existing queries that are already running on the same connection, i.e. if you already needed to have the DB open for whatever else was happening on the site.

Thanks for your insight, it gives me something to think about.
Babaganoosh... haven't seen you on ICQ in a while! Where have you been hiding? :-) heheh
I'm usually on invisible. ;)
I didn't know that this was possible.
Nevertheless, I wanted to add: if you are on low traffic, doing the query on every click won't hurt you, but it can add up quickly, as pointed out by babaganoosh. And if it does add up, hitting and writing to a flat file, locking that file if you don't want to corrupt it, will probably (meaning for SURE) hurt your server performance way more than using a DB in the first place. Neither solution is really scalable.

If you want to host it yourself / do it yourself / don't want to use a 3rd party:
=> you need memcache (or another NoSQL store) and either a CRON job to save the memcache info into the DB, or a 'garbage collector' like mechanism: every 100 clicks, save what is in memcache to the DB.

If you feel adventurous enough to trust an external party with your important data:
=> perhaps even better, use a URL shortener and get the stats from them.

If you feel adventurous, but not too much, and still trust big Google:
=> or, if you know how to do it, use Google Analytics events. I can't post URLs, so do a Google search for "google analytics How do I manually track clicks on outbound links?" and you should find how to do it.

Hope that may help, and that you don't mind I jumped into the thread to add my grain of salt when nobody asked. :2 cents:
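A rough sketch of that memcache counter + 'garbage collector' flush idea, if you want to picture it (the key names, the every-100-clicks threshold, and the table are all assumptions for illustration):

Code:
<?php
// Count each click in memcached, and every 100th click on a thumb
// flush a batch of 100 to MySQL. If the key gets evicted, a few clicks
// are lost - which the thread agrees is acceptable for this data.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$thumbId = (int) $_GET['id'];
$key = "clicks:$thumbId";

$mc->add($key, 0);               // create the counter if it doesn't exist
$count = $mc->increment($key);   // atomic increment, no locking needed

if ($count !== false && $count % 100 === 0) {
    // persist a batch of 100 clicks to the DB
    $db = new mysqli('localhost', 'db_user', 'db_pass', 'stats');
    $stmt = $db->prepare(
        'INSERT INTO thumb_clicks (thumb_id, clicks) VALUES (?, 100)
         ON DUPLICATE KEY UPDATE clicks = clicks + 100'
    );
    $stmt->bind_param('i', $thumbId);
    $stmt->execute();
}

// send the visitor on to the sponsor
header('Location: ' . $_GET['page']);
exit;
?>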
Well, what would you append? I'm curious about your strategy.

But locking or not, if traffic is enough of a problem that querying a DB would hit the server, my experience tells me that having a single SPOF with one file will be an equivalent bottleneck (if not worse, in fact), even if you don't lock. You still need to actually write the file, and the equivalent number of DB connections translates into file writes (with all the concurrent threads and/or processes fighting against each other). memcached (for example) is very well suited for counters, but that's just my 2 cents, and client-side and/or 3rd-party solutions like Google Analytics or a URL shortener for sure solve the scalability problem by giving the problem to someone else. :thumbsup

I'm also interested to understand how you get away without locking. My understanding is that between the time you open the file in append mode and the time you write to it with the pointer at what you believe is the end of the file, you may very well have a race condition: someone else opens the file in between the two instructions, the first one appends, and the second one, who also wants to append, now has its pointer somewhere other than the end of the file because of the previous write. What am I missing here? I'm interested to see your idea and way of doing this from a technical point of view, since it would open new ideas for me, thanks. :helpme

What is your strategy, what do you append, and how do you update afterwards?
This isn't mission critical data. There's no reason to worry about the possibility of a minuscule number of clicks not being counted. I've done this with a 250k+ tgp with well over a mil clicks and didn't notice many issues.
This was just logging clicks on thumbnails so they could be ordered by productivity, so the data was collected every 5 minutes and batch inserted into mysql. The numbers in my own click count and what the trade script counted were extremely close. If I were that worried about never missing a single click then memcached would certainly be an option.
If you're using PHP's fwrite, then according to the docs you don't need to flock when appending.

http://us3.php.net/manual/en/function.fwrite.php

Note: If handle was fopen()ed in append mode, fwrite()s are atomic (unless the size of string exceeds the filesystem's block size, on some platforms, and as long as the file is on a local filesystem). That is, there is no need to flock() a resource before calling fwrite(); all of the data will be written without interruption.
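To make that concrete, a minimal append-mode click logger in this style might look like the following (the per-thumb file layout and parameter names are just illustrative, not anyone's actual script):

Code:
<?php
// Log one line per click to a per-thumb file, relying on the atomicity
// of append-mode fwrite instead of flock (fine for small writes on a
// local filesystem, per the PHP docs quoted above).
$thumbId = (int) $_GET['id'];
$dest    = $_GET['page'];

$fh = fopen("/var/log/clicks/$thumbId.txt", 'a');
fwrite($fh, $_SERVER['REMOTE_ADDR'] . "\n");  // one IP per line
fclose($fh);

// send the visitor on to the sponsor
header('Location: ' . $dest);
exit;
?>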
Thanks for your input.

My experience with using a file has been different from yours, so it's great to know you got good results in real life; I shall try it again next time then. (Locking aside, all the writes queued up by appending have been a killer for me in the past; file access is very often my biggest performance trouble.)

So I have some questions:
- I did not get what you meant by "they could be ordered by productivity"?
- Are you not locking the file when you are reading it during your batch import?
- You say you don't mind losing some clicks, which I would agree with (if you have volume in the first place), so even though you are not locking, are you, or could you be, losing some clicks with your solution?
- How big is the file after 5 minutes, and how long does it take to import it?

Would you mind sharing some code showing your strategy: what and how you write, and how you read and import?

Thx again.
Before reading the file in, I would move the data files over to a temp directory for processing. I never tried to read and write at the same time.

I should also mention that the thumbs were given a unique id number, so the data files were unique to the thumb. E.g. 12345.jpg had all of its clicks logged to 12345.txt. I wasn't writing every click to the same file. I don't think that would have gone well at all. But when I append a file in PHP I never explicitly flock the file. Even the php.net page on the function says it is unnecessary, since fwrites are atomic if the handle was opened in append mode.

I suppose I could have been losing some click data, but I never really cared enough to do any checking. My traffic script and my click logging script showed very similar numbers in respect to total clicks for the day, so I just never worried about it. I do babysit my error logs just for fun, and I never saw many errors related to the logging of clicks, so I assume everything was working ok.

I don't really know how long it took to process the click data from each 5 minute period. I'm sure it was less than a minute or two even at the busiest times of the day, but I can't tell you exactly how long it took.

As far as code goes it was nothing fancy: fopen, fwrite, fclose. I just logged the IP which clicked the thumb. Same story with the importing of data: open a file, read it into an array, insert the data into a table. The only thing I did that caused any kind of overhead was removing duplicate IPs from each array. I logged both total raw and total unique clicks (so whoever submitted the thumbnail couldn't increase their position by clicking their thumb over and over - at least not more than once every 5 mins).

Keep in mind that this was on a relatively expensive dedicated server too: multi processor, raid, massive ram. I wasn't doing this on a virtual account on hostgator. The scripts were never a performance issue, except when I was trying to log each click to mysql immediately. That sucked. Most of my server load came from apache serving all of those thumbnails.
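Putting those pieces together, the 5-minute import pass might look something like this sketch (paths, table, and column names are made up here; the actual code wasn't posted):

Code:
<?php
// Hypothetical cron job, run every 5 minutes: move the per-thumb click
// logs to a temp dir so the live logger starts fresh files, then batch
// insert raw and unique click counts into MySQL.
$logDir  = '/var/log/clicks';
$tempDir = '/var/log/clicks/processing';

$db = new mysqli('localhost', 'db_user', 'db_pass', 'stats');
$stmt = $db->prepare(
    'INSERT INTO thumb_stats (thumb_id, raw_clicks, unique_clicks)
     VALUES (?, ?, ?)
     ON DUPLICATE KEY UPDATE
       raw_clicks = raw_clicks + VALUES(raw_clicks),
       unique_clicks = unique_clicks + VALUES(unique_clicks)'
);

foreach (glob("$logDir/*.txt") as $file) {
    $thumbId = (int) basename($file, '.txt');   // 12345.txt -> thumb 12345

    // move the file out of the way first; new clicks go to a fresh file
    $moved = "$tempDir/" . basename($file);
    rename($file, $moved);

    // one IP per line; raw = every click, unique = distinct IPs
    $ips    = file($moved, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $raw    = count($ips);
    $unique = count(array_unique($ips));

    $stmt->bind_param('iii', $thumbId, $raw, $unique);
    $stmt->execute();

    unlink($moved);  // this batch is done
}
?>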