Wowza kicks ass - 500Mbps and the server isn't even sweating
A while ago, I had a public argument with someone who dissed Wowza Media Server, saying it was shite and crapped out continuously. I disagreed and said he had badly configured it. He went nuts, insisting he was an expert sysadmin and that because Wowza is written in Java it couldn't take the load, blah blah.
Well, here's the proof that Wowza kicks ass. We just peaked at 500Mbps, which is pretty much pure Wowza streaming - Apache is dishing up only ~10Mbps of that. Wowza is consuming only 1.2GB of RAM to do it, and the server is barely breaking a sweat. Some pics for the geeks... http://www.borkedcoder.com/images/gfy/load/bw.gif http://www.borkedcoder.com/images/gf...che_volume.gif http://www.borkedcoder.com/images/gfy/load/ram.gif http://www.borkedcoder.com/images/gfy/load/cpu.gif http://www.borkedcoder.com/images/gfy/load/load.gif
Right.
Cool story bro.
Yeah, Wowza kicks ass - just don't play around too much with the Java heap settings.
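(For what it's worth, a minimal sketch of what I mean - the bin/setenv.sh path and JAVA_OPTS name are assumptions, check wherever your Wowza startup script actually builds the java command line; the flags themselves are standard JVM options:)
Code:
# bin/setenv.sh -- path and variable name are assumptions, adjust to
# wherever your Wowza startup script sets the java options.
# Pin -Xms to -Xmx so the heap doesn't resize under load, and don't
# hand the JVM more RAM than the box can spare: oversized heaps just
# mean longer GC pauses, which is exactly what hurts streams.
JAVA_OPTS="-server -Xms2000m -Xmx2000m"
export JAVA_OPTS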
We have been seeing really good results with Wowza too.
Pretty solid software.
Good to know. Java is generally known to be slow and a memory hog, but it seems to be working out for you. The Wowza FAQ says the baseline minimum is a quad-core system with 4 GB of RAM, going up from there based on traffic, preferably with RAID 10. What kind of hardware are you using?
I love you, borked ...
Quote:
the server is a bit of a beast:
Intel S5000PSL server motherboard
2x Intel Xeon E5420 2.5GHz quad-core (BX80574E5420A)
6x 4GB Kingston DDR2 667 fully buffered
2x Intel 40GB SSD for the system
24x Western Digital RAID Edition 4 2TB
Adaptec 3805 hardware RAID controller (up to 256 disks)

Of course, Wowza is running on the SSDs and streaming content from the 24TB array, so the SSDs surely help here.

--edit, here it is:

Code:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
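In case anyone wants to pull the same numbers on their own box, something like this does it (the pgrep pattern is an assumption - match whatever your Wowza java command line actually contains):
Code:
# one-shot top for the Wowza java process
top -b -n 1 -p "$(pgrep -f -o 'com.wowza')"

# or just the memory/CPU columns via ps
ps -o pid,rss,vsz,pcpu,etime,args -p "$(pgrep -f -o 'com.wowza')"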
Wowza works wonders... if you feed it the RAM.
Wowza by nature will destroy disk IO if it cannot buffer the content: if the content is popular and has to be sucked off the drive rather than served from cache, it will utterly destroy a server. Not sure of the exact figures, but Wowza normally decreases bandwidth by about 30-40%, not increases it - at least that's the case when people switch from nginx/lighttpd.
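A quick way to check whether a box is serving from cache or grinding the platters (iostat is in the sysstat package; nothing here is Wowza-specific):
Code:
# how much RAM is sitting in page cache - that's what spares the disks
free -m

# per-device IO stats every 5 seconds; if %util on the content array
# sits near 100 while streams stutter, you're disk-bound, not CPU-bound
iostat -x 5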
Quote:
Unless they changed something in Wowza, it serves the content from whatever folder you point it at. It doesn't care that the SSDs exist, it cares where the content is - I don't recall any buffering/caching system that would move hot content onto the SSDs for you. Not trying to bash things, but if you're using 24TB servers for streaming content, you're doing it wrong.
Quote:
We have Wowza servers running almost double what you are managing, 100% live streaming (up to the capacity of the switch). Wowza is absolutely stable - you just have to know how to use it. Once you get your settings fine-tuned you can run thousands of simultaneous connections on one reasonably specced server. :thumbsup :thumbsup
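(Not going into our exact settings, but as a generic Linux starting point - the values below are only examples - the big one is usually the open-file limit, since every connection eats a descriptor:)
Code:
# /etc/security/limits.conf - raise the fd limit for the user running
# Wowza ("wowza" is a placeholder user name)
wowza  soft  nofile  65536
wowza  hard  nofile  65536

# common sysctl knobs for lots of concurrent TCP connections
sysctl -w net.core.somaxconn=4096
sysctl -w net.ipv4.tcp_fin_timeout=30
sysctl -w net.ipv4.ip_local_port_range="10240 65535"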
Quote:
BTW, what happens when you do that? Does the fact that traffic can't get through because the NIC is full have an impact on server load? It must do, because those packets have to go somewhere... I've just never been in a position to see NIC meltdown before. :thumbsup
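(If nothing else, the kernel's per-interface counters will show when a NIC starts dropping rather than queueing - stock iproute2/proc, nothing Wowza-specific, and eth0 is just an example name:)
Code:
# byte/packet/error/drop counters per interface
ip -s link show eth0

# same counters in raw form if you want to graph them yourself
cat /proc/net/dev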
500Mbps? Are you running a tube?
Quote:
The eth1 NIC connected to the LAN is only pushing 85Mbps of content to them. I just don't want to put Wowza on them yet, for my own reasons. You don't know what this setup is, nor how it is configured, so you are in no place to tell me I'm doing things wrong....
We also use Wowza, and agree with you wholeheartedly, fine sire. :thumbsup
Quote:
Kinky setup :D Still need to switch to Wowza 3. I run Wowza 2 on the following config:
Supermicro 6026T-3RF XEON i5520 2U barebone w/ 720W redundant PSU (black)
Dual Intel 5600/5500 series Xeon quad/dual-core
4x Seagate Barracuda 7200.11 (RAID 5)
6 GB memory

The server uses a 1 TB NAS RAID 5 storage device.
Quote:
You're right, clearly we have no clue, since we only run a retarded number of large tube sites and get referrals for that kind of hosting every day. My point is simply that there are more cost-effective ways to scale than having such large/expensive machines doing all the work. We like saving people money - there's no point in paying for or buying things that aren't needed. Unlike a lot of other hosts we care about how machines perform, and we don't like grinding people into upgrades because the original hardware/design wasn't sufficient for their goals.
Quote:
This server isn't expensive, and it's a shitload less expensive than a cluster would be to reach 24TB. I know full well how to run a server cluster to scale for size. This isn't my server setup, it's my client's; I know his needs, and a 24TB system is plenty for them. You run a different operation, so your needs are different too.
Quote:
He is a smart guy, but he has a major chip on his shoulder, or ego, when it comes to the master-of-the-universe thing. You will find yourself locking horns more often than not. I am not saying he is right or wrong - there are many different ways to skin a cat. However, you will find that arguing with him is like pissing in the wind. No offense. :2 cents:
Quote:
Look, I'm just trying to help. But it's pretty apparent that no one appreciates feedback that goes against their own thinking/mentality.
Quote:
Also running with OVH!!! :thumbsup
Quote:
I like managed hosting, but as I say to my clients, you can't beat a personal sysadmin. It's like having your own PA - they know your needs and your system inside out, better than any managed hosting company ever can, simply because one does many in a general way and the other does a few in a more personalised way.
Quote:
:2 cents:
Quote:
But as you well know, in the hosting industry it seems few do it the exact same way. You can take 10 random companies, whether piss-ants or well known, and find that most of them manage their shit in ways that baffle the imagination. It has been a real eye-opening experience in that regard, to be sure. You would think there would be more uniformity. But I digress...
Quote:
:2 cents:
Quote:
To grow to 24TB, the card was replaced along with something else I forget - I'd have to dig through the emails to find exactly what it was, but I can't be bothered. Suffice it to say, scaling from 12 to 24TB cost the price of 12x 2TB server-grade disks. This server can scale to another 48TB without hassle, so I believe, with the possibility of growing that to 2x 96TB if required. I'll cross that bridge when I come to it; for now, we're a long way off filling 24TB, which is a shitload of content. The hardware side of things I leave to others who are better qualified and more in the know than I am - I know my limits. But thanks for your constructive feedback. :thumbsup
Quote:
I'm currently migrating someone away from a managed solution to their own private setup so they aren't tied down and, while I won't name the company (it's really well known here), I am astounded at how they set up the servers.... Anyway, I really didn't want this to become a cock-battling thread of hosting companies! I just want more people to move over to Wowza... cos I have an ulterior motive :winkwink:
Quote:
Java isn't that bad these days.
Quote:
http://www.borkedcoder.com/images/gfy/load/iowait.gif
Quote:
lspci should show you what cards are currently installed. Are you using LVM to concatenate sdd and sdc together, or are they separate? It looks like you are - once you cross the barrier of the disk usage on sdd and spill over into the space on the sdc partition/set you will see a breath of fresh air; currently you're putting all the disk IO on one RAID set. I could be wrong, but that's what it looks like.

Disk contention is usually overcome by creating smaller RAID sets out of a large number of spindles, e.g. 8 groups of 6 drives. If you concatenate them via LVM, then as you add content it will spread across different spindles. Once data is located and "cached" by the system it won't have to seek for it, since it knows where it lives on that RAID set. Plus you can utilise the platters more efficiently, because you now have 8 sets of disks doing various IO tasks instead of 1-2 sets doing the same task. MUCH better disk IO/throughput that way.
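If it helps, the bare-bones LVM side of that looks roughly like this - /dev/sdc and /dev/sdd are placeholders for your raid-set devices, and this obviously eats whatever is on them:
Code:
# take two hardware raid sets into one volume group
pvcreate /dev/sdc /dev/sdd
vgcreate vg_content /dev/sdc /dev/sdd

# linear (concatenated) logical volume across both sets
lvcreate -l 100%FREE -n lv_content vg_content

# filesystem and mount point for the streaming content
mkfs.ext4 /dev/vg_content/lv_content
mkdir -p /content
mount /dev/vg_content/lv_content /content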
Yup, it's still on the Adaptec:
Code:
09:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)
Quote:
e.g. hdparm -a /dev/sdd will show the current value - it's probably 256. Jack that up to 16384, i.e. hdparm -a 16384 /dev/sdd, so it's doing far fewer reads when it's grabbing data for distribution. This value can be tuned/adjusted for whatever your traffic/source files are like; it makes a world of difference on large files. The blockdev command works great too, a shorter version of using hdparm. blockdev --report will give you some information on block size - make the readahead a multiple of the block size.
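To put all of that in one place (the device name and the 16384 figure are just the examples from this thread - tune for your own files, and re-apply it at boot since the setting doesn't survive a reboot):
Code:
# check the current readahead (in 512-byte sectors)
hdparm -a /dev/sdd
blockdev --getra /dev/sdd

# bump it to 16384 sectors (8 MB) for big sequential video reads
hdparm -a 16384 /dev/sdd
# ...or the blockdev equivalent
blockdev --setra 16384 /dev/sdd

# sanity check: the RA column per device
blockdev --report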
OK, now I'm learning stuff :thumbsup
Set the readahead to 16384. I'll give it a bit of time to see how that affects the IO reads. Cheers!
Spudstr, do you have any server specials?
Nice, Borked! We love Wowza too and have found that we can do nearly 1Gbps with Wowza on our cloud servers. Where else can you find an application that can operate at nearly line-speed performance? :pimp