As a developer, I have always focused less on the database. But database configuration is key to SharePoint performance.
Here are some key points:
Run DBCC CHECKTABLE on any tables in the SharePoint databases that show some degree of fragmentation. You can schedule a SQL Agent job to run the command.
DBCC CHECKDB should be run on the SharePoint databases once a week.
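As a minimal sketch, the weekly check could be scheduled as the T-SQL command of a SQL Agent job step. The database name `WSS_Content` below is a placeholder for your own content database:

```sql
-- Weekly integrity check for a SharePoint content database.
-- Replace WSS_Content with your own database name and schedule
-- this script as a SQL Agent job step.
USE WSS_Content;

DBCC CHECKDB ('WSS_Content') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Spot-check a single table that shows fragmentation
-- (dbo.AllDocs here is only an example table name):
DBCC CHECKTABLE ('dbo.AllDocs') WITH NO_INFOMSGS;
```

`NO_INFOMSGS` suppresses the informational output so that only errors surface in the job history.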
Initial Size for the primary data file and transaction log
There are a few performance concerns with these file settings: the Initial Size value is small, the Autogrowth increment is small, and the file path points to the default directory on drive C.
For instance, if you were to upload a 10-MB file into this database using the default Initial Size and Autogrowth settings, SQL Server would have to lock the database 8 to 10 times while it grew the data file in 1-MB increments until there was enough room for the file. Furthermore, because the log file's Initial Size is small and its Autogrowth setting is 10 percent, the log file would also have to grow to accommodate the upload. Each of these 1-MB growth operations also fragments the files on disk. As you can imagine, this can have an enormous impact on SharePoint performance.
This is why it is important to carefully consider how much information most of your SharePoint databases will contain, as well as how much will be added, modified, or deleted, before you modify the Initial Size setting in the Model database. After you make the change, all new databases created from the Model database will begin with that Initial Size value, which will eliminate, or at least reduce, the need for Autogrowth to occur. There is no magic number that is best for the Initial Size of content databases; you must perform a careful analysis to make that determination yourself. As a best practice, however, a content database should not exceed 100 GB. This soft limit improves the chances of completing a recovery in under four hours.
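As a sketch of how the file settings could be adjusted, `ALTER DATABASE ... MODIFY FILE` can pre-size the data and log files and replace the default growth increments. The database and logical file names (`WSS_Content`, `WSS_Content_log`) and the sizes below are illustrative placeholders, not recommendations; determine your own values from the analysis described above:

```sql
-- Illustrative only: pre-size the files so Autogrowth rarely fires,
-- and grow in a fixed MB increment instead of 1 MB / 10 percent.
-- Find the logical file names first with:
--   SELECT name, type_desc FROM sys.database_files;
ALTER DATABASE WSS_Content
    MODIFY FILE (NAME = WSS_Content, SIZE = 10GB, FILEGROWTH = 500MB);

ALTER DATABASE WSS_Content
    MODIFY FILE (NAME = WSS_Content_log, SIZE = 2GB, FILEGROWTH = 500MB);
```

The same statement works against the Model database, so databases created afterward inherit the new Initial Size and growth increment.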
| Limit | Value |
| --- | --- |
| Items per view | 5,000 |
| Documents per library | 10 million |
| Content database size | 200 GB (up to 1 TB for some workloads) |
| Simultaneous document editors | 10 (maximum 99) |
| Columns per row | Limited by row wrapping (8,000 bytes per row) |
| Content databases per web application | 300 |
| Application pools per web server | 10 |
| Indexed items (crawl count) | 100 million per search application |
| Site collections per web application | |