1 Designing high performance I/O for SQL Server
Thomas Kejser / Mike Ruthruff
SQL Server Customer Advisory Team
Microsoft Corporation
© 2007 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
2 SQL Server Customer Advisory Team (SQLCAT)Works on the largest, most complex SQL Server projects worldwide MySpace million concurrent users at peak time, 8 billion friend relationships, 34 billion s, 1 PetaByte store, scale-out using SSB and SOA Bwin – Most popular European online gaming site – database transactions / second, motto: “Failure is not an option”; 100 TB total storage Korea Telecom - Largest telco in Korea serves 26 million customers; 3 TB Data Warehouse Drives product requirements back into SQL Server from our customers and ISVs Shares deep technical content with SQL Server community SQLCAT.com
3 SQLCAT and SQL CSS Invite You…To the SQL Server Clinic where the most experienced SQL Server experts in the world will be waiting to talk with you. Bring your toughest Questions / Challenges to the experts who have seen it all Architect and Design your future applications with experts who have done it before with some of the largest, most complex systems in the world Or just stop in to say hello! ROOM 611
4 SQL Server Design Win ProgramTarget the most challenging and innovative SQL Server applications 10+ TB DW, 3k/tran/s OLTP, Large 500GB+ Cubes, Competitive migrations, Complex deployments, Server Consolidation (1000+) Invest in large scale, referenceable SQL Server projects across the world Provide SQLCAT technical & project experience Conduct architecture and design reviews covering performance, operation, scalability and availability Offer use of HW lab in Redmond with direct access to SQL Server development team Work with Marketing Team Developing PR
5 Agenda
What is an I/O?
Deployment Considerations & Best Practices
Typical SAN challenges
Typical DAS challenges
DAS vs. SAN
Sizing & Tuning Practices
What to monitor
Benchmarking
For random I/O (aka: tuning for IOPS), with SSD examples
For aggregate throughput (aka: tuning for MB/sec)
I/O Characteristics by Workload
OLTP
DW and SSAS
SQL Server Best Practices

SQL Server 2008 I/O Considerations for SAN Deployments: SAN environments continue to become more common, are evolving at a rapid pace, and are increasing in complexity. I/O is a critical component of SQL Server 2008, and to deploy SQL Server 2008 successfully on a SAN it is critical to understand the many "it depends" topics involved. Understanding how to get the basics in place, the considerations related to SAN architecture (including shared storage environments), and the SQL Server 2008 specific considerations can increase the success rate of these deployments and provide a better overall customer experience. As we continue to learn more about this rapidly changing landscape from our customers, traditional practices may change. Guidance from the SQL Server Customer Advisory Team is based on real-life customer experiences as well as learnings from deep collaboration with our storage partners.
6 Designing high performance I/O for SQL Server
What is an I/O?
7 What is an I/O?
Throughput
Measured in MB/sec or IOPs
Performance Monitor: Logical Disk
Disk Read Bytes/sec
Disk Write Bytes/sec
Disk Reads/sec
Disk Writes/sec
Latency
Measured in milliseconds (ms)
Avg. Disk sec/Read
Avg. Disk sec/Write
More on healthy latency values later
Capacity
Measured in GB/TB
The easy one!
8 The Traditional Hard Disk Drive
[Diagram labels: cover mounting holes (cover not shown), base casting, spindle, slider (and head), case mounting holes, actuator arm, actuator axis, platters, actuator, flex circuit (attaches heads to logic board), SATA interface connector, power connector]
Source: Panther Products
9 The "New" Hard Disk Drive (SSD)
No moving parts!
10 Terminology
JBOD: Just a Bunch of Disks
RAID: Redundant Array of Inexpensive Disks
DAS: Direct Attached Storage
NAS: Network Attached Storage
SAN: Storage Area Network
Array: The box that exposes the LUN
HBA: The network card used to communicate with the SAN
Fabric: The network between SAN components
CAS: Content Addressable Storage
11 Designing high performance I/O for SQL Server
Deployment Considerations & Best Practices
12 Storage Selection - General
Number of drives matters
More drives typically yield better speed
True for both SAN and DAS
Less so for SSD, but still relevant (especially for NAND)
If designing for performance, make sure the topology can handle it
Understand the path to the drives
Best Practice: Validate and compare configurations prior to deployment
Always run SQLIO or IOMeter to test
13 Storage Selection – General Pitfalls
Organizational barriers between DBAs and storage administrators
Each needs to understand the other's "world"
Shared storage environments
At the disk level and at other shared components (i.e. service processors, cache, etc.)
Sizing only on "capacity" is a common problem
Key Takeaway: Take latency and throughput requirements (MB/s, IOPs and max latency) into consideration when sizing storage
One-size-fits-all configurations
The storage vendor should have knowledge of SQL Server and Windows best practices when the array is configured
Especially when advanced features are used (snapshots, replication, etc.)
14 How does the Server Attach to the Storage?
Disk cabling topology
Parallel disk attachment: SCSI, ATA
Serial disk attachment: FC, SATA, SAS
Controller
Dedicated or shared controller
Internal PCI, or a service processor within the array
Difference is ~$1K for an internal controller vs. $60-500K for an array controller
Network components between disks and server (Storage Area Network)
Server remote attachment
File access: NAS - SMB, CIFS, NFS
Block level: iSCSI, Fibre Channel
15 Understand the path to the drives
The hardware between the CPU and the physical drive is often complex
Different topologies, depending on vendor and technology
Rule of Thumb: The deeper the topology, the more latency
Best Practices: Understand the topology, potential bottlenecks, and theoretical throughput of the components in the path; engage storage engineers early in the process
Two major topologies for SQL Server storage:
DAS – Direct Attached Storage
Standards: (SCSI), SAS, SATA
RAID controller in the machine
PCI-X or PCI-E direct access
SAN – Storage Area Network
Standards: iSCSI or Fibre Channel (FC)
Host Bus Adapters or network cards in the machine
Switch / fabric access to the disks
16 Path to the Drives - DAS
[Diagram: PCI bus → controller (with cache) → interface → disk shelf, shown for several controller/shelf combinations]
17 Path to the Drives – DAS "on chip"
[Diagram: controllers attached directly to the PCI bus]
We will show some performance numbers from such a configuration later in this deck
18 SQL Server on DAS - General Considerations
Beware of non-disk-related bottlenecks
The SCSI/SAS controller may not have the bandwidth to support the disks
The PCI bus should be fast enough
Example: Need PCI-X 8x to consume 1 GB/sec
Can use Dynamic Disks to stripe LUNs together
Bigger, more manageable partitions
Cannot grow storage dynamically
Buy enough capacity from the start, or plan the database layout to support growth
Example: Allocate two files per LUN to allow doubling of capacity by moving half the files
Inexpensive and simple way to create a very high performance I/O system
Important: No SAN = No Cluster!
Must rely on other technologies (e.g. database mirroring) for maintaining redundant data copies
19 Path to the drives - SAN
[Diagram: server PCI buses → HBAs → switches (fabric) → array front-end Fibre Channel ports → cache → controllers/service processors → disks]
A storage network, FC or IP based
Comprised of fabric and end points
What a SAN isn't: it isn't an array or frame; it's all of the components in the network
Array: Contains the front-end ports (FC or IP), cache, service processors, disks, etc.
Best Practice: Make sure you have the tools to monitor the entire path to the drives. Understand the utilization of individual components
20 SQL Server on SAN - General Considerations
Storage technologies are evolving rapidly, and traditional best practices may not apply to all configurations
Physical isolation practices become more important at the high end
High-volume OLTP, large scale DW
Isolation of HBAs to storage ports can yield big benefits
This largely remains true for SAN, although some vendors claim it is not needed
There is no single "right" way to configure storage for SQL Server on SAN
SAN deployment can be complex
Generally involves multiple organizations to put a configuration in place
Storage ports will often be the bottleneck for DW workloads
Understanding the paths to the disks becomes even more important, and more complex
21 SQL Server on SAN - Common Pitfalls
Sizing on capacity
Assuming physical design does not matter on SAN
Overestimating the benefit of array cache
Basic configuration best practices have not been followed
Lack of knowledge about the physical configuration and the potential bottlenecks or expected limits of the configuration
Makes it hard to tell if performance is reasonable on the configuration
The array controller or the bandwidth of the connection is often the bottleneck
Key Takeaway: The bottleneck is not always the disk
No (or incomplete) monitoring strategy or baseline to compare against
Capture counters that provide the entire picture (see monitoring section)
Overestimating the ability of the SAN or array
Overutilization of shared storage
22 DAS vs. SAN
Feature | SAN | DAS
Cost | High, but may be offset by better utilization | Low, but may waste space
Flexibility | Virtualization allows online configuration changes | Better get it right the first time!
Skills required | Steep learning curve, can be complex | Simple and well understood
Additional features | Snapshots, storage replication | None
Performance | Contrary to popular belief, SAN is not a performance technology | High performance for small investment
Reliability | Very high reliability | Typically less reliable; may be offset by higher redundancy RAID levels
Clustering support | Yes | No
23 Designing high performance I/O for SQL Server
Sizing and Tuning
24 Windows View of I/O
Make sure to capture all of these counters for the complete picture. I/O size impacts latency; queue length, latency, and I/O counts can help identify problems related to sharing.

Counter | Description
Disk Reads/Writes per Second | Number of I/Os per second. Discuss with your vendor the sizing of spindles of different types and rotational speeds. Impacted by disk head movement (i.e. short-stroking the disk will provide more I/Os-per-second capacity)
Avg. Disk sec/Read & Write | Disk latency. Numbers will vary; optimal values for averages over time: 1-5 ms for Log (ideally 1 ms or better), ideally 10 ms or better for Data (OLTP), <= 25-30 ms for Data (DSS)
Avg. Disk Bytes/Read & Write | Size of the I/Os being issued. Larger I/Os tend to have higher latency (example: BACKUP/RESTORE)
Current Disk Queue Length | Hard to interpret due to virtualization of storage; not of much use in isolation
Disk Read & Write Bytes/sec | Total disk throughput. Ideally, larger block scans should be able to heavily utilize connection bandwidth

A common SQL Server myth is that a high Avg. Disk Queue Length is an indicator of bad performance. See the next slide.
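The latency thresholds above can be wrapped in a small monitoring helper. A minimal sketch in Python: the threshold values come from the slide, but the function name, the "warning" grace band, and the overall structure are illustrative assumptions, not part of any Microsoft tooling.

```python
def latency_health(avg_disk_sec, volume_type):
    """Classify Avg. Disk sec/Read|Write (in seconds, as perfmon reports it)
    against the rule-of-thumb thresholds on this slide."""
    ms = avg_disk_sec * 1000.0  # perfmon reports seconds; convert to ms
    # Thresholds from the slide: Log 1-5 ms (ideally <= 1 ms),
    # OLTP data ideally <= 10 ms, DSS data <= 25-30 ms.
    thresholds = {"log": 5.0, "oltp_data": 10.0, "dss_data": 30.0}
    limit = thresholds[volume_type]
    if ms <= limit:
        return "healthy"
    elif ms <= 2 * limit:  # grace band; arbitrary choice for illustration
        return "warning"
    return "unhealthy"

print(latency_health(0.004, "log"))        # 4 ms on a log volume -> healthy
print(latency_health(0.040, "oltp_data"))  # 40 ms on OLTP data -> unhealthy
```

The same shape works for any counter-vs-threshold check; the point is only that averages over time, not point-in-time spikes, should be compared against the thresholds.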
25 Perfmon Counters – Explained
[Diagram: ten I/Os of varying sizes (8 KB, 32 KB, 64 KB) in flight between T = 0 and T = 1 sec, illustrating how the counters relate to each other]
Disk Queue = 6 (I/Os outstanding at a point in time)
Disk Transfers/sec = IOPS
Avg. Disk Bytes/Transfer = average I/O size
Disk Bytes/sec = total bytes transferred per second
Avg. Disk sec/Transfer = latency per I/O
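The relationships the diagram illustrates can also be stated arithmetically. A short sketch (the counter names follow perfmon; using Little's law to relate queue length, rate, and latency is our own illustration, not something the slide states):

```python
# Relationships among the Logical Disk counters from the previous slides.
transfers_per_sec = 10_000            # Disk Transfers/sec (= IOPS)
avg_bytes_per_transfer = 8 * 1024     # Avg. Disk Bytes/Transfer (8 KB I/Os)
avg_sec_per_transfer = 0.005          # Avg. Disk sec/Transfer (5 ms latency)

# Disk Bytes/sec is just IOPS times average I/O size.
disk_bytes_per_sec = transfers_per_sec * avg_bytes_per_transfer
print(disk_bytes_per_sec / (1024 * 1024), "MB/sec")   # 78.125 MB/sec

# Little's law: average number of in-flight I/Os = arrival rate x latency.
# This is (roughly) what the average disk queue length measures.
avg_queue_length = transfers_per_sec * avg_sec_per_transfer
print(avg_queue_length, "outstanding I/Os on average")  # 50.0
```

This is also why a high queue length alone is not a problem indicator: a device sustaining high IOPS at low latency will legitimately carry a deep queue.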
26 SQL Server View of I/O
Tool | Monitors | Granularity
sys.dm_io_virtual_file_stats | Latency, number of I/Os | Database files
sys.dm_os_wait_stats | PAGEIOLATCH, WRITELOG | SQL Server instance level (cumulative since last start; most useful when analyzed as deltas over time periods)
sys.dm_exec_query_stats | Number of reads (logical/physical), number of writes | Query or batch
sys.dm_db_index_usage_stats | Number of I/Os and type of access (seek, scan, lookup, write) | Index or table
sys.dm_db_index_operational_stats | I/O latch wait time, page splits | Index or table
XEvents | PAGEIOLATCH | Query and database file
27 Monitoring the Array - The "Complete Picture"
Typical components monitored:
Front-end port usage: bandwidth utilization, number of concurrent requests on the port
Throughput at the LUN level / physical disk level
Physical disk I/O rates: exposes spindle sharing, undersized spindle count, and RAID choice issues
Cache utilization: % of cache utilized
Write pending %: impacts how aggressively the array flushes writes to physical media
Storage controller(s) utilization: similar to monitoring CPU utilization on any server
Trending over time, generally at lower granularity (1 min)
Need array-specific tools
Key Takeaways:
Terminology may (well, actually will) vary across hardware
Array monitoring is the only way to get the complete picture
28 I/O Benchmark Tools
Use: Test throughput of the I/O subsystem or establish a benchmark of I/O subsystem performance
SQLIO.exe
Unsupported tool available through Microsoft
IOMeter
Open source tool; allows combinations of I/O types to run concurrently against a test file
These tools are not meant to exactly simulate SQL Server engine I/O; their purpose is to run a variety of I/O types to:
"Shake out" configuration problems
Determine the capacity of the configuration
Avoid a common pitfall: test file size too small
More details on benchmarking follow
29 Benchmarking Methodology - Validate the Path to the Drives
HBA throughput, multipathing, etc.
Run sequential I/O against a file that is memory resident in the controller cache
Can throughput "near" the theoretical aggregate bandwidth be achieved?
Example: Practical throughput on a 4 Gb/s Fibre Channel port = ~360 MB/s
The limiting component could be the HBA, switch port, or front-end array ports
Test how the HBA load-balances across paths (see later)
Potential bottlenecks: connectivity (HBA, switch, etc.), controller/service processor, suboptimal host configuration
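One way to arrive at the ~360 MB/s per-port figure, assuming 8b/10b link encoding and roughly 90% protocol efficiency; the 90% factor is our assumption for illustration, not a vendor specification:

```python
link_gbps = 4.0                 # nominal 4 Gb/s Fibre Channel port
encoding_efficiency = 8 / 10    # 8b/10b encoding: 8 payload bits per 10 line bits
protocol_efficiency = 0.90      # assumed overhead for framing, protocol, etc.

raw_mb_per_sec = link_gbps * 1e9 / 8 / 1e6               # 500 MB/s of line rate
payload_mb_per_sec = raw_mb_per_sec * encoding_efficiency  # 400 MB/s of payload
practical_mb_per_sec = payload_mb_per_sec * protocol_efficiency
print(round(practical_mb_per_sec), "MB/s practical")       # 360 MB/s
```

Whatever the exact efficiency factor on your hardware, the useful habit is the same: compute the theoretical ceiling for each component in the path, then compare measured throughput against it.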
30 Benchmarking Methodology - Validate the Disks
To get a true representation of disk performance, use test files of approximately the same size as the planned data files. Small test files (even if they are larger than cache) may result in smaller seek times due to "short-stroking" and skew the results
Test each LUN path individually, then combinations of the I/O paths (scaling up)
Remember: IOPs matter most for random access workloads (OLTP); aggregate throughput matters most for scan-intensive workloads (DW)
Random reads are good for this, as they take cache out of the picture (assuming a large test file)
May need to run longer tests with sustained writes; the cache will eventually be exhausted, giving a true representation of "disk speed" for the writes
31 Benchmarking Methodology - Workload Patterns
Test a variety of I/O types and sizes
Run tests for a reasonable period of time; caching may behave differently after a long period of sustained I/O
Relatively short tests are okay for read tests with low read cache
For write-back caches, make sure you run the test long enough to measure the destaging of the cache
Allow time in between tests for the hardware to reset (cache flush)
Keep all of the benchmark data to refer to after the SQL Server implementation has taken place
Maximum throughput (IOPS or MB/s) has been reached when latency continues to increase while throughput stays near constant
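The stopping rule in the last bullet (throughput flat while latency keeps climbing) can be automated when stepping up the outstanding-I/O count between runs. A rough sketch; the 5% and 20% cutoffs are arbitrary illustration values, not a standard:

```python
def find_saturation_point(runs):
    """runs: list of (outstanding_ios, iops, avg_latency_ms) from successive
    benchmark runs with increasing queue depth. Returns the queue depth at
    which throughput has plateaued while latency keeps rising, or None."""
    for prev, cur in zip(runs, runs[1:]):
        iops_gain = (cur[1] - prev[1]) / prev[1]
        latency_gain = (cur[2] - prev[2]) / prev[2]
        if iops_gain < 0.05 and latency_gain > 0.20:
            return cur[0]  # queue depth where the device saturated
    return None

# Synthetic example: IOPS scale up to queue depth 4, then only latency grows.
runs = [(1, 2000, 0.5), (2, 3800, 0.6), (4, 7000, 0.9),
        (8, 7200, 2.0), (16, 7300, 4.1)]
print(find_saturation_point(runs))  # 8
```

In practice you would feed this from the summary lines of successive SQLIO or IOMeter runs rather than hard-coded tuples.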
32 I/O Sizing for SQL Server - OLTP
Do: Base sizing on the spindle count needed to support the IOPs requirements with healthy latencies
Don't: Size on capacity
Spindle count rule of thumb:
10K RPM – IOPs at 'full stroke'
15K RPM – IOPs at 'full stroke'
Can achieve 2x or more when 'short-stroking' the disks (using less than 20% of the capacity of the physical spindle)
These figures are for random 8K I/O
Remember the RAID level impact on writes (2x for RAID 10, 4x for RAID 5)
Cache hit rates, or the ability of the cache to absorb writes, may improve these numbers
RAID 5 may benefit from larger I/O sizes
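The sizing rule plus the RAID write penalties combine into a back-of-the-envelope spindle calculator. A sketch: the per-spindle IOPs figure is a parameter you should take from your own benchmarks of the actual drives; the 150 used below is purely illustrative.

```python
import math

# Physical I/Os generated per logical write, as stated on the slide.
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}

def spindles_needed(read_iops, write_iops, raid_level, iops_per_spindle):
    """Back-end IOPs = reads + writes * RAID penalty; divide by what one
    spindle sustains at healthy latency (take this from your benchmarks)."""
    backend_iops = read_iops + write_iops * RAID_WRITE_PENALTY[raid_level]
    return math.ceil(backend_iops / iops_per_spindle)

# 4,000 random reads/sec and 1,000 writes/sec; illustrative 150 IOPs/spindle:
print(spindles_needed(4000, 1000, "RAID10", 150))  # 6000 backend -> 40 spindles
print(spindles_needed(4000, 1000, "RAID5", 150))   # 8000 backend -> 54 spindles
```

The RAID 5 case needing a third more spindles for the same logical workload is exactly the write-penalty effect the slide warns about.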
33 Scaling of Spindle Count - Short vs. Full Stroke
Each set of 8 disks exposes a single 900GB LUN
RAID group capacity ~1.1TB
Test data set is fixed at 800GB
A single 800GB test file for a single LUN (8 disks), two 400GB test files across two LUNs, etc.
Lower IOPs per physical disk when more of the physical disks' capacity is used (longer seek times)
34 SQL Server on SSD - OLTP Workload
EMC DMX-4 array, RAID 5, 4 physical SSD devices
Log and data on the same physical devices
Database size ~300GB
Random reads and writes for checkpoints / sequential log writes
16-core server completely CPU bound
Sustained 12K IOPs

Counter | Average
Avg. Disk sec/Read (total) | 0.004
Disk Reads/sec (total) | 10100
Avg. Disk sec/Write (total) | 0.001
Disk Writes/sec (total) | 1944
% Processor Time | 97
Batches/sec | 5100
35 … Comparing with spinning media
EMC DMX-4 array
RAID 1+0: 34 physical devices for data, 4 physical devices for log
Same workload/database as the SSD configuration (OLTP)
Nearly the same sustained I/Os with ~10x the number of spindles
Higher latency, slightly lower throughput
"Short-stroking" the spinning media

Counter | Average
Avg. Disk sec/Read (total) | 0.017
Disk Reads/sec (total) | 10259
Avg. Disk sec/Write (total) | 0.002
Disk Writes/sec (total) | 2103
% Processor Time | 90
Batches/sec | 4613
36 Keep in mind when comparing
The number of writes at the physical level differs from what perfmon reports, due to RAID level
Physical I/O is much higher on the RAID 5 SSDs
The traditional HDD is being 'short-stroked', resulting in more IOPs capacity for each physical drive
More information on these tests: symmetrix-dmx-enterprise-flash-with-sql-server-databases-wp.pdf

Disks | Total Logical IO | Total Physical IO (RAID adjusted) | IO/s per Device
SSD (RAID 5) | 12,044 | 17,876 | 4,469
Traditional HDD (RAID 10) | 12,362 | 14,465 | 380
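The "RAID adjusted" column can be reproduced from the perfmon numbers on the two previous slides, assuming the usual penalties (RAID 5: 4 physical I/Os per logical write for the read-modify-write of data and parity; RAID 10: 2 physical writes per logical write):

```python
def physical_io(logical_reads, logical_writes, write_penalty):
    # Reads pass through 1:1; each logical write costs write_penalty physical I/Os.
    return logical_reads + logical_writes * write_penalty

# SSD configuration (RAID 5): 10,100 reads/sec + 1,944 writes/sec on 4 devices.
ssd_physical = physical_io(10100, 1944, 4)
print(ssd_physical, ssd_physical // 4)    # 17876 total, 4469 per SSD device

# HDD configuration (RAID 10): 10,259 reads/sec + 2,103 writes/sec on 38 disks.
hdd_physical = physical_io(10259, 2103, 2)
print(hdd_physical, hdd_physical // 38)   # 14465 total, 380 per disk
```

Both results match the table above, which is a good sanity check that the RAID penalties are the right model for these two configurations.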
37 SSD Directly on PCI-e Slot
> 10,000 IOPs
Latency < 1 ms
38 Checkpoint and "The Cache Effect" - 2GB Write Cache
Read I/Os per second decrease after the checkpoint has completed
The reason for the drop in read throughput is transparent to the host
The array is writing out the dirty cache pages it received during the checkpoint, impacting reads
39 Checkpoint and "The Cache Effect" - Compared to… 8GB Write Cache
A larger cache results in less impact on read throughput
Writes occurring in the background do not have to be as "aggressive"
40 I/O Sizing for SQL Server - DW
Do: Size on the aggregate throughput requirements
Don't: Only consider the number of spindles needed for this; other components are often the bottleneck (controllers, switches, HBAs, etc.)
Ensure there is adequate bandwidth (very important for sequential I/O workloads)
Know the limits of your path (HBAs, switch ports, array ports)
~360 MB/s observed/practical throughput per 4 Gb/s HBA
Consider aggregate connectivity limits (host -> switch -> array)
Consider service processor or controller limits
Often, disabling the read cache or dedicating it to read-ahead gives the best performance
41 Validating Aggregate Bandwidth - Cached File Method
Two 4 Gb/s dual-port HBAs
Theoretical throughput limit ~1.6 GB/s
Two paths to each service processor (~800 MB/s theoretical limit per SP)
First attempt: only ~1.0 GB/s total for both SPs
Second attempt: changed the load balancing algorithm to round robin
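The ~1.6 GB/s figure follows directly from the port count and the per-port payload rate (using the ~400 MB/s of payload a 4 Gb/s port carries after 8b/10b encoding, as derived earlier):

```python
hbas = 2
ports_per_hba = 2
mb_per_sec_per_port = 400   # payload rate of a 4 Gb/s FC port after 8b/10b

theoretical = hbas * ports_per_hba * mb_per_sec_per_port
print(theoretical, "MB/s total")   # 1600 MB/s, i.e. ~1.6 GB/s

service_processors = 2
print(theoretical / service_processors, "MB/s per SP")  # 800.0
```

When the measured number lands well under this ceiling, as in the ~1.0 GB/s first attempt, the gap itself is the clue that a path or load-balancing setting, not the disks, is the bottleneck.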
42 Array Pre-fetch & True Sequential Reads
Eight streams of sequential 8KB I/O result in 83K IOPs
64 disks total across 8 RAID groups (8 LUNs)
83,000 IOPs / 64 disks = ~1,296 8KB IOPs per disk
1,296 8KB IOPs per disk = 162 64KB IOPs per disk, or 81 128KB IOPs per disk
Now that makes more sense
Bandwidth (2x 4 Gb/s HBAs) was the bottleneck

SQLIO invocation and (abridged) output:
Z:\IO\SQLIO>call sqlio -kR -s600 -fsequential -o8 -b8 -LS -Fz:\io\sqlio\GHIJRSTU_IOBW_100GB.txt
sqlio v1.5.SG
using system counter for latency timings, counts per second
parameter file used: z:\io\sqlio\GHIJRSTU_IOBW_100GB.txt
file g:\iobw.tst with 4 threads (0-3) using mask 0x0 (0)
… (files h: through u:, 4 threads each)
32 threads reading for 600 secs from files g:\iobw.tst through u:\iobw.tst
using 8KB sequential IOs
enabling multiple I/Os per thread with 8 outstanding
initialization done
CUMULATIVE DATA:
throughput metrics: IOs/sec: MBs/sec:
latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 2 Max_Latency(ms): 724
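The per-disk arithmetic on this slide is worth spelling out, since it is how you detect array prefetch from host-side numbers alone: the 8 KB rate per spindle is implausible for physical disks, but becomes believable once re-expressed as larger coalesced reads.

```python
total_iops = 83_000   # 8 KB sequential I/Os measured at the host
disks = 64

per_disk_8k = total_iops // disks
print(per_disk_8k)    # 1296 8KB I/Os per disk: far beyond one spindle

# If the array coalesces the 8 KB requests into larger physical reads,
# the per-disk rate becomes realistic for a spinning drive:
print(per_disk_8k * 8 // 64)    # equivalent 64 KB I/Os per disk: 162
print(per_disk_8k * 8 // 128)   # equivalent 128 KB I/Os per disk: 81
```

The same conversion (small-block IOPS times block size, divided by a plausible physical I/O size) is a quick sanity check whenever host-side IOPS look too good to be true.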
43 Optimizing the path to the drives
[Diagram: LUNs ← storage ports (with caches) ← switches ← HBAs, shown before and after path preferences are set]
In our first illustration the storage port caches have to coordinate to maintain cache coherency. This spends CPU cycles on the already overloaded storage ports. By setting a preference for a specific path from HBA to storage port and LUN, in MPIO and in the SAN configuration utility, we were able to more than double the throughput of the storage ports.
44 Designing high performance I/O for SQL Server
I/O Characteristics by Workload Type
45 The Quick Guide to I/O Workloads
OLTP (Online Transaction Processing)
Typically heavy on 8KB random reads/writes
Some amount of read-ahead; size varies in multiples of 8K (see read-ahead slide)
Many "mixed" workloads observed in customer deployments
Rule of Thumb: Optimize for random I/O (spindle count)
RDW (Relational Data Warehousing)
Typically large block reads (table and range scans) and large block writes (bulk load)
Rule of Thumb: Optimize for high aggregate throughput I/O
Analysis Services
Up to 64KB random reads; average block often around 32KB
Highly random and often fragmented data
Rule of Thumb: Optimize for random, 32KB blocks
46 Designing high performance I/O for SQL Server
OLTP Workloads
47 OLTP Workloads
I/O patterns generally random in nature
Selective reads
Writes to data files through periodic checkpoint operations
Random in nature, with heavy bursts of writes; can issue a large amount of outstanding I/O
Steady writes to the transaction log
Many "OLTP" deployments consist of a "mixed" workload with some amount of online reporting
This results in larger block, sequential I/O happening concurrently with small block (~8K) I/O
Can make sizing more challenging
Critical to size on the spindle count required to support the number of IOPs, not on capacity
Critical to ensure low I/O latency on transaction log writes
Log response time impacts transaction response times
48 Log Writes
Workload description:
Sequential I/O; write size varies depending on the nature of the transaction
Transaction "commit" forces the log buffer to be flushed to disk
Writes of up to 60KB
Threads fill log buffers and request the log manager to flush all records up to a certain LSN; the log manager thread writes the buffers to disk
Pattern / monitoring:
SQL Server wait stats: WRITELOG, LOGBUFFER, LOGMGR
Performance Monitor: MSSQL:Databases
Log Bytes Flushed/sec
Log Flushes/sec
Avg. Bytes per Flush = (Log Bytes Flushed/sec) / (Log Flushes/sec)
Wait per Flush = (Log Flush Wait Time) / (Log Flushes/sec)
Log manager throughput considerations:
SQL Server 2005 SP1 or later: limit of 8 (32-bit) or 32 (64-bit) outstanding log writes, and no more than 480K "in flight" in either case
SQL Server 2008: increases the "in flight" limit per log to 3840K (a factor of 8)
SQL Server 2000 SP4 & SQL Server 2005 RTM: log writes limited to 8 outstanding (per database)
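The two derived values in the monitoring column are simple ratios of the perfmon counters. A sketch; the sample counter values are made up purely for illustration:

```python
def log_flush_metrics(log_bytes_flushed_per_sec, log_flushes_per_sec,
                      log_flush_wait_time_ms):
    """Derive per-flush metrics from the MSSQL:Databases counters."""
    avg_bytes_per_flush = log_bytes_flushed_per_sec / log_flushes_per_sec
    wait_per_flush_ms = log_flush_wait_time_ms / log_flushes_per_sec
    return avg_bytes_per_flush, wait_per_flush_ms

# Illustrative sample: 10 MB/s of log flushed across 500 flushes/sec,
# with 1,000 ms of accumulated flush wait time per second.
avg_bytes, wait_ms = log_flush_metrics(10 * 1024 * 1024, 500, 1000)
print(avg_bytes)   # 20971.52 bytes, roughly 20 KB per flush
print(wait_ms)     # 2.0 ms of wait per flush
```

Tracking these two ratios over time shows whether commit latency problems come from large flushes (big transactions) or from slow log device response.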
49 Checkpoint / Lazy Writer
Workload description:
Heavy bursts of random writes flushing dirty pages from the buffer pool
Types of checkpoints:
Background/automatic checkpoints: triggered by log volume or recovery interval and performed by the checkpoint thread
User-initiated checkpoints: initiated by the T-SQL CHECKPOINT command
Reflexive checkpoints: automatically performed as part of some larger operation, such as recovery, restore, snapshot creation, etc.
Pattern / monitoring:
Random, but SQL Server will attempt to find adjacent pages
Up to 256KB in a single I/O request
Performance Monitor: MSSQL:Buffer Manager
Checkpoint pages/sec
Lazy Writes/sec
50 Checkpoint (continued)
Checkpoint throttling
Checkpoint measures its I/O latency impact and automatically adjusts checkpoint I/O to keep overall latency from being unduly affected
CHECKPOINT [checkpoint_duration]
CHECKPOINT now allows an optional numeric argument, which specifies the number of seconds the checkpoint should take
Checkpoint makes a "best effort" to complete in the time specified
If the specified time is too low, it runs at full speed
NUMA systems spread the checkpoint work across writers on each node
51 Index Seeks
Workload description:
Query plans performing loop joins will typically do many index seeks
Single-row lookups in an index
Traverse the B-tree of the index, retrieve a single page/row
OLTP workloads are typically heavy on these
Pattern / monitoring:
Random I/O, 8 KB block sizes
sys.dm_db_index_usage_stats: user_seeks, user_lookups
Performance Monitor: MSSQL:Access Methods - Index Seeks/sec
Wait stats: PAGEIOLATCH
52 Designing high performance I/O for SQL Server
DW and Analysis Services Workloads
53 General I/O Characteristics
Typically much longer running queries than OLTP
Queries touch a large part of a table to return a small result; the optimizer will often choose hash join strategies
Large table and range scans are common; these I/O operations are sequential and use large block sizes
The database will typically be in simple recovery mode
Less transaction log traffic, but only if using minimally logged operations
Tuning for sequential I/O can make a big difference
Almost an order of magnitude improvement can be had
54 Table / Range Scan
Workload description:
Query plans doing hash and merge joins
Aggregation queries
Typical for DW workloads
SQL Server may perform read-ahead
Dynamically adjusts the size of the I/O based on page continuity
Standard Edition: up to 128 pages in the queue
Enterprise Edition: up to 512 pages in the queue
Pattern / monitoring:
Sequential in nature
I/O block sizes up to 512KB
SQL Server wait stats: PAGEIOLATCH_
55 Bulk Load
Workload description:
Occurs when a bulk load operation is performed
Typical for DW workloads
I/O depends on the recovery mode
SIMPLE / BULK_LOGGED: writes go to the database
FULL: writes go to the transaction log and are then flushed to the database
Pattern / monitoring:
Sequential I/O, 64KB-256KB; block sizes depend on database file layout
SQL Server wait stats: WRITELOG / LOGBUFFER, PAGEIOLATCH_EX, IMPROVIO_WAIT
56 Analysis Services – I/O Pattern
Random I/O, block sizes 32-64KB
Low latency is advantageous
Fragmentation of the disk is often high
Analysis Services cannot stripe its data across files
Can selectively place partitions on multiple volumes
57 Analysis Services – I/O Configuration
Best Practice: Dedicate a single LUN to cubes
Store nothing else there (fragmentation)
Typical characteristics:
Reads MUCH more than it writes: write once, read many
High compression on a subset of the data; cubes are small compared to the relational source
Redundant scale-out servers can be configured
Can also use scaled-out servers for high availability
Strong synergy with SSD technology
58 Designing high performance I/O for SQL Server
SQL Server Best Practices
59 How Many Data Files Do I Need?
More data files does not necessarily equal better performance
Determined mainly by 1) hardware capacity and 2) access patterns
The number of data files may impact scalability of heavy write workloads
Potential for contention on allocation structures (PFS/GAM/SGAM; more on this later)
Mainly a concern for applications with a high rate of page allocations on servers with >= 8 CPU cores
More of a consideration for tempdb (in most cases)
Can be used to maximize the number of spindles: data files can be used to "stripe" a database across more physical spindles
Best practice: Pre-size data/log files, use equal sizes for files within a single filegroup, and manually grow all files within a filegroup at the same time (vs. AUTOGROW)
60 PFS/GAM/SGAM Contention
A high rate of allocations to any data file can result in scaling issues due to contention on allocation structures
Impacts the decision for number of data files per filegroup
Especially a consideration on servers with many CPU cores
PFS/GAM/SGAM are structures within the data file which manage free space
Easily diagnosed by looking for contention on PAGELATCH_UP
Either in real time in sys.dm_exec_requests or tracked in sys.dm_os_wait_stats
Resource description is in the form DBID:FILEID:PAGEID
Can be cross-referenced with sys.dm_os_buffer_descriptors to determine the type of page
For more details refer to:
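A minimal diagnostic sketch using the two DMVs named on the slide; it is not a tuned monitoring query, just a starting point:

```sql
-- Real time: who is waiting on page latches right now?
-- wait_resource is in the form DBID:FILEID:PAGEID; cross-reference the
-- page with sys.dm_os_buffer_descriptors to see if it is PFS/GAM/SGAM.
SELECT r.session_id, r.wait_type, r.wait_resource, r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.wait_type LIKE 'PAGELATCH%';

-- Cumulative since server start: how much PAGELATCH waiting overall?
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'
ORDER BY wait_time_ms DESC;
```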
61 Why Should I Avoid a Single File per Filegroup for Large Databases?
Provides less flexibility with respect to mapping data files onto differing storage configurations
Multiple files can be used as a mechanism to stripe data across more physical spindles and/or service processors (applies to many small/mid-range arrays)
A single file prevents possible optimizations related to file placement of certain objects (relatively uncommon)
Allocation-heavy workloads (PFS contention) may incur waits on allocation structures, which are maintained per file
62 TEMPDB – A Special Case
Tempdb placement (dedicated vs. shared spindles)
Generally recommended on separate spindles
However, depends on how well you know your workload's use of tempdb
In some cases it may be better to place tempdb on spindles shared with data to utilize more cumulative disks
PFS contention is a bigger problem in tempdb
Best Practice: 1 file per CPU core
Consider using trace flag -T1118
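The file-per-core practice above can be sketched as follows; file names, the path, and sizes are hypothetical, and all files should be sized equally so the proportional-fill algorithm spreads allocations evenly:

```sql
-- Illustrative: add extra tempdb data files (one per CPU core, per the
-- slide's guidance), all pre-sized to the same value.
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 8GB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 8GB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 8GB);
```

Trace flag -T1118 is a startup parameter (added via SQL Server Configuration Manager), not a T-SQL statement; it takes effect after a service restart.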
63 TEMPDB – Understanding Usage
Many underlying technologies within SQL Server utilize tempdb (index rebuild with sort in tempdb, RCSI, etc.)
SQLServer:Transactions: Free Space in Tempdb (KB), Version Store counters
Related DMVs: sys.dm_db_session_space_usage, sys.dm_db_task_space_usage, sys.dm_exec_requests
Remember certain SQL Server features will utilize tempdb (it is not just used by temporary objects from T-SQL)
Service Broker, RCSI, internal objects (hash tables), online index rebuild, etc.
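As an example, the task-level DMV named above can be aggregated to see which sessions are consuming tempdb right now, split into user objects vs. internal objects (hash tables, sorts, and similar) – a sketch, not a production monitoring query:

```sql
-- Which sessions are allocating tempdb pages, and is it user objects
-- (# tables, table variables) or internal objects (hashes, sorts)?
SELECT session_id,
       SUM(user_objects_alloc_page_count)     AS user_pages,
       SUM(internal_objects_alloc_page_count) AS internal_pages
FROM sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_pages DESC;
```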
64 Designing high performance I/O for SQL Server – Windows
65 Windows NT I/O System Interface – Where does an I/O go?
Application
Windows Cache Manager
Windows NT I/O system interface
File system drivers: CDFS, FAT32, NTFS
FTDisk driver
Class drivers: CD-ROM, tape, disk
Storage port driver
Miniport drivers: SmartArray, Emulex, QLogic
66 Configuring Disks in Windows – The One-Slide Best Practice
Use disk alignment at 1024KB
Use GPT if MBR is not large enough
Format partitions at 64KB allocation unit size
One partition per LUN
Only use dynamic disks when there is a need to stripe LUNs using Windows striping (i.e. Analysis Services workloads)
Tools: DiskPar.exe, DiskPart.exe and DmDiag.exe; Format.exe; Disk Manager
Note that you can use DiskPart for most operations now. However, you still need DiskPar.exe and DmDiag.exe to get the geometry of the drives and to manage dynamic disks
67 Dynamic Disks vs. Basic Disks
Cluster support: Basic – Yes; Dynamic – No
Mount point in cluster: Basic – Yes; Dynamic – No
Software stripe support: Basic – No, must use abilities of the I/O subsystem; Dynamic – Yes
Software mirror support: Basic – No, must use abilities of the I/O subsystem; Dynamic – Yes
Dynamic growth: Basic – Yes; Dynamic – Only non-striped disks
Supports alignment: Basic – Yes; Dynamic – No
68 Microsoft Technical Learning Center Located in the Expo HallVisit the Microsoft Technical Learning Center Located in the Expo Hall Microsoft Ask the Experts Lounge Microsoft Chalk Talk Theater Presentations Microsoft Partner Village
69 Thank you for attending this session and the 2009 PASS Summit in Seattle
70 Designing high performance I/O for SQL Server – Appendix
71 Additional Resources SQL Server I/O Basics SQL Server PreDeployment Best Practices Disk Partition Alignment Best Practices for SQL Server server.aspx
72 FILESTREAM
Writes to varbinary(max) go through the buffer pool and are flushed during checkpoint
Reads and writes to FILESTREAM data do not go through the buffer pool (either T-SQL or Win32)
T-SQL uses buffered access to read and write data
Win32 can use either buffered or non-buffered access; depends on the application's use of the APIs
FILESTREAM I/O is not tracked via sys.dm_io_virtual_file_stats
Best practice: separate FILESTREAM data onto its own logical volume for monitoring purposes
Writes to FILESTREAM generate less transaction log volume than varbinary(max)
The actual FILESTREAM data is not logged
FILESTREAM data is captured as part of database backup and transaction log backup
May increase throughput capacity of the transaction log
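To make the varbinary(max)-vs-FILESTREAM distinction concrete, here is a hypothetical table with a FILESTREAM column. It assumes the database already has a FILESTREAM filegroup configured (ideally on its own logical volume, per the best practice above); the table and column names are illustrative:

```sql
-- FILESTREAM columns require a ROWGUIDCOL with a UNIQUE constraint.
-- The blob bytes live on the NTFS volume behind the FILESTREAM
-- filegroup, not in the data file, and are not written to the log.
CREATE TABLE dbo.Documents
(
    DocId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
            DEFAULT NEWID(),
    Content VARBINARY(MAX) FILESTREAM NULL
);
```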
73 Backup / Restore
Backup and restore operations utilize internal buffers for the data being read/written
Number of buffers is determined by:
The number of data file volumes
The number of backup devices
Or by explicitly setting BUFFERCOUNT
If database files are spread across a few (or a single) logical volume(s), and there are a few (or a single) output device(s), optimal performance may not be achievable by default
Tuning can be achieved by using the BUFFERCOUNT parameter for BACKUP / RESTORE
More information: compression-in-sql-server-2008.aspx
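A hedged example of the BUFFERCOUNT tuning described above. The database name, device paths, and values are hypothetical and should be validated by measurement on your own hardware:

```sql
-- Stripe the backup across two devices and explicitly raise the
-- buffer count (and transfer size) instead of relying on the defaults.
BACKUP DATABASE SalesDB
TO DISK = 'G:\Backup\SalesDB_1.bak',
   DISK = 'H:\Backup\SalesDB_2.bak'
WITH BUFFERCOUNT = 64,
     MAXTRANSFERSIZE = 4194304;  -- 4MB, the maximum transfer size
```

Note that each buffer consumes MAXTRANSFERSIZE bytes of memory, so very large BUFFERCOUNT values can exhaust address space; increase gradually.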
74 A Higher Level – Disks and LUNs
Each disk maps to a LUN
The mapping can be discovered with DiskPart.exe
A LUN abstracts the physical disks below it
Understand the physical characteristics of each LUN
Windows cannot “see” how the LUN is built – use the vendor's tool for this!
Best Practice: Make sure you learn how to use that tool!
Two types of basic disk:
MBR (default) – limited to 2TB
GPT – theoretical limit of 4 exabytes
You can convert an MBR disk to GPT using Disk Manager, but only before the partitions are created
Basic disk = LUN
75 LUN, Volume / Partitions
Each LUN may contain more than one volume/partition
Partitions are formatted using a file system: NTFS or FAT
Best Practice: Use NTFS
If the partition is not the last partition on the LUN, it can be dynamically extended
Best Practice: One partition per disk/LUN
76 Creating and Formatting Partitions
Sector-align new partitions at creation time
Start at an offset that is a power of two (and >32)
Rule of thumb: 64KB or multiples of it
Sector alignment cannot be controlled using Windows Disk Manager
Windows 2000 Server: DISKPAR (note the missing T) can be used to create aligned partitions; available as part of the Windows 2000 Resource Kit
Windows Server 2003 SP1: DISKPART.EXE adds an ALIGN option, eliminating the need for DISKPAR.EXE: create partition primary align=64
Windows Server 2008: sector alignment is done automatically at a 1024KB boundary
NTFS allocation unit size – Best Practice: 64KB for SQL Server; Analysis Services may benefit from 32KB
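The alignment and formatting steps above might look like this as a DiskPart script. The disk number and drive letter are examples only; align= takes kilobytes, so align=1024 gives the 1024KB offset recommended earlier. The format command inside DiskPart requires Windows Server 2008 – on Windows Server 2003 run format.exe /A:64K separately:

```
rem Hypothetical DiskPart script - run with: diskpart /s align.txt
select disk 2
create partition primary align=1024
assign letter=E
format fs=ntfs unit=64K quick
```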
77 Dynamic Disks – A Special Case
Using Disk Manager, you can convert a basic disk to a dynamic disk
Dynamic disks allow you to create software RAID. Options:
RAID0 (striped volume)
RAID1 (mirrored volume)
RAID5 (striped with parity)
Typical usage: stripe (RAID0) multiple LUNs together
Can be used to work around storage array limitations
Limitations / considerations:
Cannot be used in clusters
Cannot grow stripes or RAID-5 volumes dynamically
Cannot be aligned
Use DmDiag.exe to discover information about dynamic drives
78 SSD – The “New” Hard Disk Drive
SSD is currently used as the name for two different types of storage technology:
NAND-based flash devices (AKA: EFD, Flash)
Battery-backed DDR
There are “no moving, mechanical parts”
Only bound by electrical failure, but special considerations apply to NAND
Some big advantages:
Power consumption often around 20% of traditional drives
Random = sequential (for reads)!
Extremely low latency on access
Not all SSD devices are created equal
Beware of non-disk-related bottlenecks: service processors, bandwidth between host/array, etc.
Game changer!
79 SSD – Battery-Backed DRAM
The drive is essentially DRAM:
RAM on a PCI card (example: FusionIO)
...or in a drive enclosure (example: Intel X25)
...or with a fiber interface (example: DSI3400)
Presents itself to Windows as a hard drive
Throughput close to the speed of RAM
Battery backed up to persist storage
Be careful about downtime – how long can the drive survive with no power?
As RAM prices drop, these drives are becoming larger
Extremely high throughput – watch the path to the drives
80 SSD – NAND Flash
Storage is organized into cells of 512KB
Each cell consists of 64 pages, each page 8KB
When a cell needs to be rewritten, the 512KB block must first be erased
This is an expensive operation and can take a long time
The disk controller will attempt to locate free cells before trying to erase existing ones
Writes can be slow; a DDR “write cache” is often used to “overcome” this limitation
As blocks fill up, NAND becomes slower with use – but only up to a certain level; it eventually peaks out
Still MUCH faster than typical drives
Larger NAND devices are typically built by RAID'ing smaller devices together
This happens at the hardware (disk controller) level and is invisible to the OS and other RAID systems
81 Overview of Drive Characteristics
                          7,500rpm SATA   15,000rpm SAS   SSD (NAND Flash)   SSD (DDR)
Seek latency              8-10ms          3-4.5ms         70-90µs            15µs
Seq. read speed (64KB)    ? MB/sec        ?               800MB/sec          3GB/sec
Ran. read speed (8KB)     1-3 MB/sec      ?               ?                  ?
Seq. write speed (64KB)   25 MB/sec       >150MB/sec      ?                  ?
Ran. write speed (8KB)    ?               ?               100MB/sec          ?
Peak transfer speed       130MB/sec       ?               ?                  ?
Max size / drive          1TB             300GB           512GB              N/A
Cost per GB               Low             Medium          Medium-High        High / Very High
MTTF                      1.4M hours      1M hours        2M hours           ?