What is disk I/O and how does it affect the performance of a server?
What is DISK I/O?
The term disk I/O refers to the input/output operations, read or write, performed on a physical disk, typically measured in KB/s. In simpler terms, it is the speed at which data is transferred between the hard disk and the RAM. Put another way, disk I/O covers every process that writes to or reads from a storage device; on a shared web hosting server, that device is usually a hard disk drive (HDD). I/O operations on an HDD are particularly slow compared with solid-state memory such as RAM: HDD I/O is, on average, around 2,000 times slower.
Monitoring tools let you track the read and write operations of each logical disk on your system and set thresholds. You get alerted if the metrics below reach the levels you preset:
- Writes/sec – the rate of write operations.
- Reads/sec – the rate of read operations.
- Busy time – the percentage of elapsed time during which the disk drive was busy servicing read or write requests.
- Queue length – the number of requests queued on the disk.
Together, these metrics give a good working picture of what disk I/O is.
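The metrics above are all derived from cumulative counters that the operating system keeps per disk. A minimal sketch of how they can be computed from two samples of those counters (the counter names and the sample values below are made up for illustration, not a real API):

```python
# Derive reads/sec, writes/sec, and busy time % from two samples of
# cumulative per-disk counters taken `interval_s` seconds apart.

def disk_metrics(sample1, sample2, interval_s):
    """Each sample is a dict with cumulative 'reads', 'writes', and
    'busy_ms' (milliseconds the disk spent servicing requests)."""
    reads_per_sec = (sample2["reads"] - sample1["reads"]) / interval_s
    writes_per_sec = (sample2["writes"] - sample1["writes"]) / interval_s
    # Fraction of the interval the disk was busy, as a percentage.
    busy_pct = (sample2["busy_ms"] - sample1["busy_ms"]) / (interval_s * 1000) * 100
    return reads_per_sec, writes_per_sec, busy_pct

# Hypothetical counters sampled 10 seconds apart:
s1 = {"reads": 1000, "writes": 500, "busy_ms": 2000}
s2 = {"reads": 1800, "writes": 1100, "busy_ms": 6000}
print(disk_metrics(s1, s2, 10))  # (80.0, 60.0, 40.0)
```

An alerting tool simply compares these computed rates against your preset thresholds on every sampling interval.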
What impacts I/O performance?
For random disk access workloads (a database, mail server, file server, etc.), you should focus on how many input/output operations can be performed per second; this measure is called IOPS.
It’s an important performance factor whether you are using a simple Linux shared hosting server or a dedicated server. In fact, even cloud servers on AWS give you the option of selecting the right level of IOPS.
Four primary factors that impact IOPS:
- Multidisk Arrays – More disks in the array mean greater aggregate IOPS. If one disk can perform 150 IOPS, two disks can perform roughly 300 IOPS, although real-world scaling is rarely perfectly linear.
- Average IOPS per drive – The greater the number of IOPS each drive can handle, the greater the total IOPS capacity. For spinning disks, this is largely determined by the drive’s rotational speed.
- RAID Factor – Your server is most likely using a RAID configuration for storage, meaning multiple disks are combined for reliability and redundancy. Some RAID levels carry a heavy penalty for write operations: with RAID 6, every write request requires at least 6 disk operations, while with RAID 1 and RAID 10 a write request requires only 2. The fewer disk operations per write, the higher the usable IOPS capacity.
- Read and Write Workload – If you have a high percentage of write operations and a RAID setup that performs many disk operations for each write request (such as RAID 5 or RAID 6), your effective IOPS will be significantly lower.
Monitoring disk I/O
On data-heavy servers, it is important to measure disk performance regularly; this helps you understand how changes affect your disk performance over time. Work through the following steps:
- Find the most active files, file systems, and logical volumes:
  - Would the “hot” file systems be better placed on a different physical drive, or spread across multiple physical drives? (lslv, iostat, filemon)
  - Are the “hot” files local or remote? (filemon)
  - Does paging space dominate disk utilisation? (vmstat, filemon)
  - Is there enough memory to cache the file pages being used by running processes? (vmstat, svmon)
  - Does the application perform a lot of synchronous (non-cached) file I/O?
- Determine file fragmentation:
  - Are the “hot” files heavily fragmented? (fileplace)
- Find the physical volume with the highest utilisation:
  - Is the type of drive or I/O adapter causing a bottleneck? (iostat, filemon)
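The first step, finding the most active disk, can be sketched in a few lines. On Linux the raw counters live in /proc/diskstats (on AIX you would use the filemon and iostat tools named above); the snapshot string below is made up, and on a real server you would read the file twice and diff the counters, or simply run iostat:

```python
# Find the busiest disk from a /proc/diskstats-style snapshot.
# Field 5 is cumulative sectors read; field 9 is cumulative sectors written.
SNAPSHOT = """\
   8       0 sda 52310 120 4182340 31520 98211 450 12904422 284100 0 61220 315700
   8      16 sdb 1202 3 96460 810 240 0 3840 150 0 900 960
"""

def busiest_disk(diskstats_text):
    """Return (device, sectors_read + sectors_written) for the busiest disk."""
    best = None
    for line in diskstats_text.splitlines():
        fields = line.split()
        name = fields[2]
        sectors = int(fields[5]) + int(fields[9])  # read + written sectors
        if best is None or sectors > best[1]:
            best = (name, sectors)
    return best

print(busiest_disk(SNAPSHOT))  # ('sda', 17086762)
```

With the busiest device identified, the per-file questions in the checklist (hot files, fragmentation, paging) narrow down which workload on that device is responsible.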