SQL Server and scalability


In his last two posts, Anko describes peer-to-peer replication in SQL Server 2005. It is a valuable tool for improving both the availability and scalability of the database tier. Because the other tiers of the application can easily be 'scaled out' using load balancing, this directly affects the scalability and availability of the entire application.

What is scalability? Wikipedia defines it as "a desirable property of a system, a network or a process, which indicates its ability to either handle growing amounts of work in a graceful manner, or to be readily enlarged". If your application supports 100 concurrent users, how difficult would it be to support 1000? A 'graceful manner' would be adding some extra hardware to the server. It would not be graceful if you had to rebuild the entire application.

With the continuing growth of hardware power, it is increasingly rare for an application to require more than a single database server to deal with its load. Adding more hardware is called 'scaling up': a more powerful server handles more transactions. Although in most cases a normal server would be sufficient, you could spend a bit more: for a mere 6 million dollars you get over a million tpmC. If the business case requires those numbers of transactions, the cost of the server will probably become less important.

The alternative is 'scaling out': adding more servers to share the load. Peer-to-peer replication enables this kind of scenario. The requirement for consistency between the different nodes makes scaling out the database much more difficult than scaling out the web/application server.

Suppose we have two nodes, A & B, with peer-to-peer replication enabled. In an ideal situation it shouldn't matter which node processes an incoming request. For reading relatively static data this works pretty well. The real challenges come with modifying data. If an insert is performed against node A, you can't successfully perform an update to that record on node B before the insert from node A is replicated.

There are two ways of solving this problem:

  • Perform modifications on all nodes before completing the transaction
  • Divide the database into logical pieces. Modifications to customers whose names start with A-M are performed on node A; modifications for names from N to Z are performed on node B.
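The second option amounts to a routing rule the application applies before every modification. A minimal sketch of such a rule, assuming two hypothetical nodes named `NodeA` and `NodeB` and routing purely on the customer's first letter:

```python
# Hypothetical write-routing helper for the A-M / N-Z partitioning scheme.
# The node names are illustrative assumptions, not part of SQL Server itself;
# in practice they would map to separate connection strings.

def route_node(customer_name: str) -> str:
    """Return the node that should process modifications for this customer."""
    first = customer_name.strip().upper()[:1]
    if "A" <= first <= "M":
        return "NodeA"
    # N-Z (and anything non-alphabetic) falls through to node B
    return "NodeB"

print(route_node("Anderson"))   # NodeA
print(route_node("Zimmerman"))  # NodeB
```

Because a given record is only ever modified on one node, the two peers can never issue conflicting updates to the same row.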

The last solution is what the article on MSDN suggests. Another solution mentioned in the article is to have all modifications performed on one of the nodes, while read-only tasks are performed against the other node. This comes pretty close to physically clustering the database, but would still require the application to be aware of the configuration of the database. The application should be able to cope with the situation where a record that's inserted on one node is not yet returned from the 'read-only' node.
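That read/write split is also a routing decision the application makes per statement. A rough sketch, assuming the same two hypothetical node names and simplifying connections to plain strings; note the caller must tolerate a replica that lags behind the write node:

```python
# Hypothetical read/write split: modifications go to the write node, reads to
# the replica. Because replication is asynchronous, a SELECT against the
# replica may not yet see a row just inserted on the write node.

WRITE_NODE = "NodeA"
READ_NODE = "NodeB"

def pick_node(statement: str) -> str:
    """Send INSERT/UPDATE/DELETE to the write node, everything else to the replica."""
    verb = statement.lstrip().split(None, 1)[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE"):
        return WRITE_NODE
    return READ_NODE

print(pick_node("INSERT INTO Customers VALUES (1, 'Anderson')"))  # NodeA
print(pick_node("SELECT * FROM Customers"))                       # NodeB
```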

The use of peer-to-peer replication is something that should be decided when developing the architecture of the entire solution. At that time it can be weighed against other options, like dividing the application into different subsystems with their own databases, or caching data on the web/application server.

So there are multiple options to scale SQL Server to handle a large number of transactions. Spending enough time on architecture and adding more hardware should cover most requirements. Combined with scaling out, it's hard to imagine a load it could not handle.

Next post we'll look at availability.