Configuring huge pages for GT.M x86[-64] on Linux

Huge pages are a Linux feature that may improve the performance of GT.M applications in production. Huge pages create a single page table entry for a large block (typically 2MiB) of memory in place of hundreds of entries for many smaller (typically 4KiB) blocks. This reduction in the memory used for page tables frees memory for other uses, such as file system caches, and increases the probability of TLB (translation lookaside buffer) matches, both of which can improve performance. The performance improvement related to reducing the page table size becomes evident when many processes share memory, as they do for global buffers, journal buffers, and replication journal pools. Configuring huge pages on Linux for x86 or x86_64 CPU architectures helps improve:

  1. GT.M shared memory performance: When your GT.M database uses journaling, replication, and the BG access method.

  2. GT.M process memory performance: For your process working space and dynamically linked code.
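Before configuring anything, you can confirm that the running kernel exposes huge pages and check the current pool; these commands are standard Linux and independent of GT.M:

$ grep -i huge /proc/meminfo        # shows Hugepagesize and the HugePages_Total/Free/Rsvd counters
$ cat /proc/sys/vm/nr_hugepages     # huge pages currently reserved in the default pool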

Note

At this time, huge pages have no effect for MM databases, for the text, data, or bss segments of each process, or for the process stack.

While FIS recommends you configure huge pages for shared memory, you need to evaluate whether or not configuring huge pages for process-private memory is appropriate for your application. Having insufficient huge pages available during certain commands (for example, a JOB command; see the complete list below) can result in a process terminating with a SIGBUS error. This is a current limitation of Linux. Before you use huge pages for process-private memory on production systems, FIS recommends that you perform appropriate peak load tests on your application and ensure that you have an adequate number of huge pages configured for your peak workloads, or that your application is configured to perform robustly when processes terminate with SIGBUS errors. The following GT.M features fork processes and may generate SIGBUS errors when huge pages are not available: JOB, OPEN of a PIPE device, ZSYSTEM, interprocess signaling that requires the services of gtmsecshr when gtmsecshr is not already running, SPAWN commands in DSE, GDE, and LKE, argumentless MUPIP RUNDOWN, and replication-related MUPIP commands that start server processes and/or helper processes. As increasing the available huge pages may require a reboot, an interim workaround is to unset the environment variable HUGETLB_MORECORE for GT.M processes until you are able to reboot or otherwise make available an adequate supply of huge pages.
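A minimal sketch of the interim workaround mentioned above is a wrapper script that clears HUGETLB_MORECORE before starting a GT.M process, so that the process and anything it forks fall back to traditional pages for process-private memory (the script and the mumps invocation are illustrative):

#!/bin/sh
# Illustrative wrapper: disable huge pages for process-private memory only;
# HUGETLB_SHM, which governs shared memory, is left untouched.
unset HUGETLB_MORECORE
exec $gtm_dist/mumps -dir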

Consider the following example of a memory map report of a Source Server process running at peak load:

$ pmap -d 18839
18839: /usr/lib/fis-gtm/V6.2-000_x86_64/mupip replicate -source -start -buffsize=1048576 -secondary=melbourne:1235 -log=/var/log/.fis-gtm/mal2mel.log -instsecondary=melbourne
Address   Kbytes Mode Offset   Device Mapping
--- lines removed for brevity -----
mapped: 61604K writeable/private: 3592K shared: 33532K
$

Process id 18839 uses a large amount of shared memory (33532K) and can benefit from configuring huge pages for shared memory. Configuring huge pages for shared memory does not cause a SIGBUS error when a process does a fork. For information on configuring huge pages for shared memory, refer to the "Using huge pages" and "Using huge pages for shared memory" sections. SIGBUS errors only occur when you configure huge pages for process-private memory; these errors indicate you have not configured your system with an adequate number of huge pages. To prevent SIGBUS errors, you should perform peak load tests on your application to determine the number of required huge pages. For information on configuring huge pages for process-private memory, refer to the "Using huge pages" and "Using huge pages for process working space" sections.

As application response time can be deleteriously affected if processes and database shared memory segments are paged out, FIS recommends configuring systems for use in production with sufficient RAM so as not to require swap space or a swap file. You must configure an adequate number of huge pages for your application needs, as determined empirically by benchmarking and testing, and there is little downside to a generous configuration that keeps a buffer of huge pages available for workload spikes. However, an excessive allocation of huge pages may reduce system throughput by reserving memory that could otherwise be used by applications unable to use huge pages.
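As an illustration of the sizing arithmetic only (the numbers are hypothetical, not a recommendation): a region with 2048 global buffers of 4KiB each needs roughly 8MiB of shared memory for the buffer pool, before database and journal control structures are added; with 2MiB huge pages that shared memory segment therefore consumes at least four huge pages. A shell sketch of the round-up calculation, with the real segment size best taken from ipcs or your actual configuration:

shm_bytes=$((2048 * 4096))               # hypothetical: 2048 global buffers of 4KiB, control structures excluded
huge_page_bytes=$((2 * 1024 * 1024))     # 2MiB huge pages
echo $(( (shm_bytes + huge_page_bytes - 1) / huge_page_bytes ))   # rounds up; prints 4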

Using huge pages

Prerequisites

  • A 32- or 64-bit x86 CPU running a Linux kernel with huge pages enabled.

    All currently Supported Linux distributions appear to support huge pages; to confirm, use the command: grep hugetlbfs /proc/filesystems, which should report: nodev hugetlbfs

  • libhugetlbfs.so

    Use your Linux system's package manager to install the libhugetlbfs.so library in a standard location. Note that libhugetlbfs is not in Debian repositories and must be manually installed; GT.M on Debian releases is Supportable, not Supported.

  • A sufficient number of huge pages available.

    To reserve huge pages, boot Linux with the hugepages=num_pages kernel boot parameter; or, shortly after bootup when unfragmented memory is still available, reserve them with the command: hugeadm --pool-pages-min DEFAULT:num_pages

    For subsequent on-demand allocation of huge pages, use: hugeadm --pool-pages-max DEFAULT:num_pages

    These delayed (from boot) actions do not guarantee availability of the requested number of huge pages; however, they are safe because, if a sufficient number of huge pages is not available, Linux simply uses traditional-sized pages.
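Beyond hugeadm, the default huge page pool can also be sized through the vm.nr_hugepages sysctl; this is standard Linux kernel behavior, not GT.M-specific, and the value 1024 below is purely illustrative:

$ sudo sysctl -w vm.nr_hugepages=1024      # request 1024 huge pages now (best effort)
$ grep -i hugepages_ /proc/meminfo         # verify HugePages_Total and HugePages_Free

To make the setting persist across reboots, add vm.nr_hugepages to /etc/sysctl.conf (or a file under /etc/sysctl.d), subject to the same fragmentation caveat as the delayed hugeadm commands above.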

Using huge pages for shared memory

To use huge pages for shared memory (journal buffers, the replication journal pool, and global buffers), do both of the following; a combined example appears after this list:

  • Permit GT.M processes to use huge pages for shared memory segments (where available, FIS recommends option 1 below; however, not all file systems support extended attributes). Either:

    1. Set the CAP_IPC_LOCK capability for your mumps, mupip, and dse processes with a command such as:

      setcap 'cap_ipc_lock+ep' $gtm_dist/mumps

      or

    2. Permit the group used by GT.M processes to use huge pages with the following command, which requires root privileges:

      echo gid >/proc/sys/vm/hugetlb_shm_group
  • Set the environment variable HUGETLB_SHM for each process to "yes".
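Putting the two steps together, here is a minimal sketch of the shared memory setup using option 1 (run the setcap commands as root; the sourced environment file is illustrative):

$ setcap 'cap_ipc_lock+ep' $gtm_dist/mumps
$ setcap 'cap_ipc_lock+ep' $gtm_dist/mupip
$ setcap 'cap_ipc_lock+ep' $gtm_dist/dse
$ getcap $gtm_dist/mumps        # verify: should report cap_ipc_lock+ep

# and, in the environment of every GT.M process (for example, a sourced gtmenv file):
export HUGETLB_SHM=yes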

Using huge pages for GT.M process private memory

To use huge pages for process working space and dynamically linked code:

  • Set the environment variable HUGETLB_MORECORE for each process to "yes".

Although not required to use huge pages, your application is also likely to benefit from including the path to libhugetlbfs.so in the LD_PRELOAD environment variable.
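For example, an environment setup script for GT.M processes might contain lines like the following; the library path is illustrative and varies by distribution, so use the location where your package manager installed libhugetlbfs.so:

export HUGETLB_MORECORE=yes
export LD_PRELOAD=/usr/lib64/libhugetlbfs.so    # illustrative path; adjust for your system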

If you enable huge pages for all applications (by setting HUGETLB_MORECORE, HUGETLB_SHM, and LD_PRELOAD as discussed above in /etc/profile and/or /etc/csh.login), you may find it convenient to suppress warning messages from common applications that are not configured to take advantage of huge pages by also setting the environment variable HUGETLB_VERBOSE to zero (0).
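If you do enable huge pages system-wide, the corresponding /etc/profile additions might look like the following sketch (the library path is again illustrative):

# Enable huge pages for all applications and suppress warnings from
# applications not configured to take advantage of them.
export HUGETLB_SHM=yes
export HUGETLB_MORECORE=yes
export LD_PRELOAD=/usr/lib64/libhugetlbfs.so
export HUGETLB_VERBOSE=0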

Refer to the documentation of your Linux distribution for details. Other sources of information include the Linux kernel documentation on hugetlbpage support and the libhugetlbfs documentation.

Note
  • Since the memory allocated by Linux for shared memory segments mapped with huge pages is rounded up to the next multiple of huge pages, there is potentially unused memory in each such shared memory segment. You can therefore increase any or all of the number of global buffers, journal buffers, and lock space to make use of this otherwise unused space. You can make this determination by looking at the size of shared memory segments using ipcs (see the sketch after this note). Contact FIS GT.M support for a sample program to help you automate the estimate.

  • Transparent huge pages may further improve virtual memory page table efficiency. Some Supported releases automatically set transparent_hugepages to "always"; others may require it to be set at or shortly after boot-up. Consult your Linux distribution's documentation.
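In connection with the two points above, standard Linux commands show the size of each shared memory segment and the current transparent huge page setting; this is only an inspection sketch, not the sample estimation program mentioned above:

$ ipcs -m                                           # shared memory segments and their sizes in bytes
$ cat /sys/kernel/mm/transparent_hugepage/enabled   # reports, for example: [always] madvise never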
