Database Design: Block Size Design

A database designer has a number of factors to consider when determining a tablespace strategy for a database. A major influence is the block size of each tablespace and, where it differs from the default, how those tablespaces will interact with the various buffer caches.

Default Block Size

The default block size of your database will have been influenced by the type of application to be run against it and the operating system of its server (e.g. OLTP databases favour smaller block sizes whereas data warehouses favour larger ones). Choosing the default block size is outside the scope of this article.
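As a quick reference point, the default block size of an existing database can be read from its initialisation parameters; the query below is a minimal sketch:

  -- Report the database default block size in bytes
  SELECT value AS default_block_size
  FROM   v$parameter
  WHERE  name = 'db_block_size';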

Available Buffer Caches

Before decisions can be made on the tablespace strategy, an understanding of the available caches is required.

For each block size used in the database there is an associated buffer cache. In addition to these there is a KEEP pool to retain frequently used smaller tables, and a RECYCLE pool for infrequently used larger tables. The KEEP and RECYCLE pools can only hold data stored in the default block size, and segments configured to use them will always do so.
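As an illustration, the buffer pools currently configured for an instance (including any non-default block size caches and the KEEP and RECYCLE pools) can be listed from V$BUFFER_POOL; this is a simple sketch of such a query:

  -- List each configured buffer pool, its block size and current size (MB)
  SELECT name, block_size, current_size
  FROM   v$buffer_pool
  ORDER  BY block_size, name;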

It is vital to allocate memory effectively across all the caches to avoid an I/O-bound database. Oracle Database 10g automates the management of certain aspects of shared memory via its Automatic Shared Memory Management (ASMM) feature and provides the following benefits:

  • Reduces the risk of running out of shared pool memory
  • Optimally uses available memory
  • Improves database performance by constantly matching memory allocations and instance memory needs

ASMM readjusts the sizes of the following main pools based on the existing workload:

  • Buffer cache (DB_CACHE_SIZE)
  • Shared pool (SHARED_POOL_SIZE)
  • Large pool (LARGE_POOL_SIZE)
  • Java pool (JAVA_POOL_SIZE)
  • Streams pool (STREAMS_POOL_SIZE)

Some pools in the SGA are not subject to dynamic resizing and must be sized explicitly (a configuration sketch follows the list below), e.g.:

  • Log buffer (LOG_BUFFER)
  • DB buffer cache keep pool (DB_KEEP_CACHE_SIZE)
  • DB buffer recycle pool (DB_RECYCLE_CACHE_SIZE)
  • DB buffer nK block size pools (DB_nK_CACHE_SIZE)
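
The sketch below shows how these parameters might be set when enabling ASMM. The sizes are purely illustrative assumptions, and a server parameter file (spfile) is assumed so the changes persist:

  -- Enable ASMM by setting a target for the total SGA (size is illustrative)
  ALTER SYSTEM SET sga_target = 2G SCOPE = BOTH;
  -- Auto-tuned pools can be left at 0, or given a value to act as a minimum
  ALTER SYSTEM SET shared_pool_size = 0 SCOPE = BOTH;
  ALTER SYSTEM SET db_cache_size = 0 SCOPE = BOTH;
  -- Manually sized pools must still be set explicitly (illustrative sizes)
  ALTER SYSTEM SET db_keep_cache_size = 128M SCOPE = BOTH;
  ALTER SYSTEM SET db_recycle_cache_size = 64M SCOPE = BOTH;
  ALTER SYSTEM SET db_16k_cache_size = 64M SCOPE = BOTH;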

Default Block Size Caches

The total default block size buffer cache is made up of the following caches:

  • Buffer cache (DEFAULT) – the default buffer cache – holds what can be considered the “warm” blocks in the database.
  • DB buffer cache keep pool (KEEP) – for segments accessed fairly frequently which, if left in the default cache, would be aged out – these are the “hot” blocks in the database and the ones that can lead to block contention if not properly managed.
  • DB buffer recycle pool (RECYCLE) – for large segments accessed randomly which would otherwise cause excessive buffering – these are the “do not care” blocks which should not be allowed to interfere with the rest of the database.

These caches use the default block size and are therefore only used with objects stored in default block size tablespaces. Objects use the DEFAULT buffer cache unless their BUFFER_POOL storage parameter overrides this with either KEEP or RECYCLE.
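For example, a segment can be directed at the KEEP or RECYCLE pool via its storage clause; the table names below are hypothetical:

  -- Keep a small, frequently accessed table's blocks in the KEEP pool
  ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);
  -- Send a large, infrequently scanned table through the RECYCLE pool
  ALTER TABLE audit_history STORAGE (BUFFER_POOL RECYCLE);
  -- Confirm which pool each segment is assigned to
  SELECT segment_name, buffer_pool
  FROM   user_segments
  WHERE  segment_name IN ('LOOKUP_CODES', 'AUDIT_HISTORY');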

Non-default Block Size Tablespaces

Non-default block sizes may be used for different reasons (a tablespace-creation sketch follows this list):

  • Smaller sizes reduce the number of rows in a block and can therefore reduce the risk of contention on hot blocks.
  • Larger blocks reduce the overhead of a physical read, i.e. a single read operation (which carries a fixed cost) retrieves more data. They are favoured for data subject to large scans (full table scans or index range scans).
  • Indexes favour large block sizes (though this is highly debated and many arguments against it can be found on the Internet). The theory is that a larger block size reduces the number of leaf blocks and improves index fast full scans (IFFS). However, if indexes are held in non-default block size tablespaces, then they cannot benefit from the KEEP and RECYCLE pools, which the corresponding table data may be utilising. My view is that if an OLTP database is performing enough large index fast full scans to warrant a separate large block size tablespace, then there is probably an issue with the design.
  • Large rows which do not fit in a block suffer from row chaining and therefore favour larger blocks to reduce the overhead of chaining. However, a larger block size is not the only solution. If the row is large because of inline LOBs, these can be stored out of line. Multiple VARCHAR2(4000) columns which fill up could be further normalised into a more generic table holding a row per column, etc. – in effect simulating the LOBSEGMENT storage of LOBs.
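
The following is a sketch of how a non-default block size tablespace might be configured; the cache for that block size must exist before the tablespace can be created, and the names, sizes and datafile path are illustrative assumptions:

  -- Allocate a cache for 16K blocks (required before creating a 16K tablespace)
  ALTER SYSTEM SET db_16k_cache_size = 64M SCOPE = BOTH;
  -- Create a tablespace using the non-default 16K block size
  CREATE TABLESPACE ts_data_16k
    DATAFILE '/u01/oradata/ORCL/ts_data_16k01.dbf' SIZE 1G
    BLOCKSIZE 16K;
  -- Objects placed here use only the 16K cache; KEEP/RECYCLE do not apply
  CREATE TABLE sales_history (sale_id NUMBER, sale_data VARCHAR2(4000))
    TABLESPACE ts_data_16k;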

Storage of LOBs

Columns with Character Large Object (CLOB) data types require special consideration for their storage parameters and their potential for caching. See the Oracle document LOB Performance Guideline for a detailed explanation.

In summary, a CLOB column on a table created with default settings holds its data in line with the row in the same block (or chained block), but if the data for a specific row exceeds approximately 4,000 bytes, it is held out of line in a separate LOBSEGMENT for that row.

The storage and caching considerations are different for in-line and out-of-line storage. In-line storage behaves in the same way as any other non-LOB data in the table and is subject to the same caching. Out-of-line data is held in chunks sized as a multiple of the block size (the default chunk size is one block).

A separate tablespace and BUFFER_POOL setting can be specified for the out-of-line storage which would help prevent large amounts of data flushing through the standard buffer cache. Whether the LOB segments are cached or not can also be controlled.
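A sketch of how such a LOB storage clause might look is shown below. The table, column and tablespace names are hypothetical, and lob_data is assumed to be a default block size tablespace so that the RECYCLE pool applies:

  -- Force the CLOB out of line, into its own tablespace, cached via the RECYCLE pool
  CREATE TABLE documents (
    doc_id   NUMBER PRIMARY KEY,
    doc_text CLOB
  )
  LOB (doc_text) STORE AS doc_text_lob (
    TABLESPACE lob_data
    DISABLE STORAGE IN ROW              -- always store the LOB data out of line
    CHUNK 8192                          -- a multiple of the tablespace block size
    CACHE                               -- buffer LOB reads and writes
    STORAGE (BUFFER_POOL RECYCLE)       -- keep LOB blocks out of the DEFAULT pool
  );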

Conclusion

The choice of the default block size for a database is very important and requires careful consideration. Just as important is the use of non-default block size tablespaces, which needs careful thought to ensure that the maximum benefit is gained from the various buffer caches and that decisions do not actually have a detrimental effect on the performance of the database.

Many of the points discussed in this article can be pre-empted during database design, thereby avoiding the database performance crises that can often hit a development project.
