Quote:
Originally Posted by AdultKing
You haven't really provided enough information for anyone to give you useful advice.
However, generally speaking, if you are dealing with massive databases then a standard dedicated-server model is not the way to go.
You would use some kind of flexible object storage that can scale up or down according to your needs.
Amazon, OVH, IBM, Google, HP and countless other companies provide highly scalable, low-latency storage solutions. The OVH Public Cloud is an interesting product to look at because it's lower cost than many competitors.
Then you need to engineer a way to store and retrieve your data objects from the data store.
Will you need a high level of redundancy? Will you need to index the data before returning it to the end user? Do you need to scale up and down easily, or are you going to provide a fixed retrieval solution on a dedicated machine?
Engineering big data projects is non-trivial, and you really need to put in a great deal of work to design a system that will continue to meet your needs as you grow and as user demand scales up and down. You don't want to be paying for compute or storage that you're not using, so something cloud-based where you pay for what you use is desirable.
Some products to look at:
https://aws.amazon.com/solutions/
IBM - Cloud Computing for Builders & Innovators
https://www.ovh.co.uk/cloud/
"Design a system that will continue to meet your needs" - this is exactly what we all are paying close attention to now, well before launch. Use of a cloud based storage system combined with traditional servers is what has been the plan.
I have already forwarded your links for them to look into.
Thanks AdultKing.
