When it comes to capacity planning for SQL Server with the help of performance data, one can argue that performance starts with application and database design. However, there are also cases where good performance depends on having the proper resources available in the business. To use the right resources, you need to determine the CPU model you have, the amount of memory, and the type of storage you use.
Most of the time, businesses overlook capacity planning and fail to use the data they already have to make informed choices about what they actually need. You can successfully use your performance data to drive capacity planning for your SQL Server. Note, again, that capacity planning does not just mean figuring out how much disk space the business needs; it covers all the resources the SQL Server instance must have to manage its workload.
New or existing solutions
Capacity planning for new solutions is challenging because you need to estimate the workload based on the data you collect from your business. This means asking some tough questions about the volume of data you expect the business to generate in the first month, the first six months, and the first year. When a new solution is coming in, this is generally the last thing the business thinks about, so you often get very vague answers. For new solutions, make a best-effort guess; that way you at least arrive at a reasonable working number.
If the solution comes from a vendor, ask the vendor for capacity-planning recommendations about the space and resources you will need. They might not have the data, and you might not get what you need, but it does no harm to ask.
Even if the vendor does not have the information, once your system has been running for some months you can send them your data, along with information about what the hardware looks like, its workload, and more. This does not need to be an elaborate write-up, but the feedback might prompt them to be more proactive in the future.
Planning for storage
Planning for the volume of storage needed is relatively simple, but it requires some work up front: monitoring the disk space and running a query to capture file information. Such a query captures, for instance, the size of each database file along with the space used. You need to trend this data over time, and that does not mean just a few weeks. Check how the files change over months, ideally one to two years, because an application's usage patterns often change. This data is simple to capture, and it needs only a small amount of space to store.
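As a sketch of that trending step: assuming you have already exported monthly used-space samples for a data file (for example, from a scheduled capture of `sys.master_files`) into a simple list, a least-squares line fit lets you extrapolate future growth. The function name and the sample numbers below are invented for illustration:

```python
# Project database file growth from monthly size samples using a
# simple least-squares linear fit. Sample data is invented.

def project_growth(monthly_sizes_gb, months_ahead):
    """Fit a straight line to monthly file sizes and extrapolate
    months_ahead beyond the last sample."""
    n = len(monthly_sizes_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_sizes_gb) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, monthly_sizes_gb)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + months_ahead)

# Twelve months of used-space samples for one data file, in GB.
sizes = [120, 124, 129, 133, 138, 142, 147, 151, 156, 160, 165, 169]
print(round(project_growth(sizes, 12), 1))  # projected size a year out
```

A straight line is the simplest possible model; if your growth is seasonal or accelerating, trend the raw numbers over a longer window before extrapolating, as the text suggests.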
Experts from RemoteDBA.com, an esteemed name in database management, say that this data is an invaluable reference to keep when you are procuring storage. If you provide quantitative data about the system’s growth, you have a better chance of getting the space you need upfront rather than asking for it at a later stage. When you ask for storage space, make sure you include tempdb in the calculations.
Planning for CPU
Optimizing CPU performance is not just about the number of CPUs you have. You also need to take into account the workload and the model (for example, a data warehouse with large parallel queries versus OLTP with serial queries). With this information, and a little help from Glenn, you can determine the best processor for your server. Last but not least, do not forget the licensing costs and the limitations that depend on your edition of SQL Server.
Planning for memory
Memory is inexpensive, and experts recommend that you always buy the maximum amount of memory your SQL Server can hold. Reading data from memory is faster than reading it from disk, so the more data that fits in memory, the better. Note that the whole database does not have to fit in memory; only your working set of data does.
Planning for storage performance
When it comes to your storage performance requirements, you hear many businesses talk about IOPS, or input/output operations per second. Reads and writes per second are those input and output operations, so you should define the IOPS requirements for a single instance. If you know the reads, the writes, and the user connections, you can do some calculations and figure out the IOPS per user. This is beneficial when you plan to grow the solution and add more users. To make sure the solution scales, one option is to take the IOPS per user calculated from your current X users and estimate the IOPS for your anticipated Y users. You can then take this data to your storage person to discuss the potential configurations available. You can also calculate the maximum IOPS for a disk configuration if you have information about the disks (the number of disks, their speed, their size, and the RAID configuration). You can test IO throughput for a drive using CrystalDiskMark, although this may not be possible if the storage has not been selected yet. Once it is in place, however, you should run this testing to make sure the IOPS for a given drive can meet the expected workload.
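The per-user scaling calculation described above can be sketched as follows. The workload numbers are invented; in practice the read and write rates would come from your own monitoring:

```python
def iops_per_user(reads_per_sec, writes_per_sec, user_connections):
    """Average IOPS attributable to each user connection."""
    return (reads_per_sec + writes_per_sec) / user_connections

def projected_iops(reads_per_sec, writes_per_sec, current_users, future_users):
    """Scale measured IOPS from X current users to Y anticipated users."""
    return iops_per_user(reads_per_sec, writes_per_sec, current_users) * future_users

# Invented example: 1,500 reads/sec and 500 writes/sec across 200 users today.
per_user = iops_per_user(1500, 500, 200)    # 10.0 IOPS per user
print(projected_iops(1500, 500, 200, 350))  # prints 3500.0 (for 350 users)
```

This linear scaling is only as good as the assumption behind it, which the text returns to below: that future users will behave like current ones.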
IOPS are only one way to look at storage performance. Understand that this data tells you how much IO is occurring; ideally, if you know the IOPS and the storage can meet the requirements, then latency should be minimal. However, latency is what affects performance. To determine what latency exists, you will need to use a tool like DiskSpd to benchmark the storage. Glenn has a great article that explains how to analyze IO performance, and another article on how to use DiskSpd to test it and understand the latency. I highly recommend reviewing both articles if you have not looked at storage and performance before.
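The rough maximum-IOPS estimate for a disk configuration mentioned earlier can be sketched with the common RAID write-penalty rule of thumb (RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6). The per-disk IOPS figure and workload mix below are illustrative assumptions, not vendor specifications; a benchmark on the real hardware should always take precedence:

```python
# Estimate the workload IOPS a disk array can sustain, given the
# number of disks, per-disk IOPS, RAID level, and write fraction.
# This is a rule-of-thumb estimate, not a substitute for benchmarking.

RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def max_effective_iops(disks, iops_per_disk, raid_level, write_fraction):
    """Divide raw array IOPS by the blended read/write penalty."""
    raw = disks * iops_per_disk
    penalty = RAID_WRITE_PENALTY[raid_level]
    read_fraction = 1.0 - write_fraction
    return raw / (read_fraction + write_fraction * penalty)

# Eight 15K drives (~180 IOPS each, an assumed figure) in RAID 10,
# serving a workload that is 25% writes.
print(round(max_effective_iops(8, 180, "raid10", 0.25)))  # prints 1152
```

Note how the write penalty makes the same spindles deliver far fewer effective IOPS on RAID 5 or 6 for write-heavy workloads, which is exactly why the read/write mix belongs in the conversation with your storage person.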
Note that these calculations rest on the assumption that new connections will use your system in much the same way current ones do. That might not always hold; the real figures will only show once the actual system is in place.
When you take the sum of reads and writes and divide it by the number of user connections, you get the average IOPS per user. You can then estimate the IOPS for the solution based on the anticipated user connections, and take this data to your storage person to discuss the potential configurations available to your business.

Capacity planning is about more than knowing how much space you need for database files. You need to understand the workload and what it requires in terms of CPU, memory, and disk resources. To do this, you need data… which means you need to capture baselines. My very first session in the SQL Server community was in December of 2010, and it was on the topic of baselines. Six years later I am still talking about their importance, and I am still hearing from people that they do not have these numbers. If you want to do smart, targeted capacity planning, you need to collect the appropriate data… otherwise you are just guessing.