X-Git-Url: https://git.m6w6.name/?a=blobdiff_plain;f=docs%2Fmemcached_behavior.pod;h=f37f8a0cce7ee817d095ffd25100dfc68f0426e1;hb=f05cd5b77ca7b17440bfc7ed9f48f7c11d269767;hp=7807bc9f0496c86c854187511624eb31b50f1deb;hpb=6f42f1c77da54da0b19274cc0d6b6c9745e40de0;p=m6w6%2Flibmemcached

diff --git a/docs/memcached_behavior.pod b/docs/memcached_behavior.pod
index 7807bc9f..f37f8a0c 100755
--- a/docs/memcached_behavior.pod
+++ b/docs/memcached_behavior.pod
@@ -37,12 +37,30 @@ memcached_behavior_set() will flush and reset all connections.
 
 =over 4
 
+=item MEMCACHED_BEHAVIOR_USE_UDP
+
+Causes libmemcached(3) to use the UDP transport when communicating
+with a memcached server. Not all I/O operations are supported
+when this behavior is enabled. The following operations will return
+C<MEMCACHED_NOT_SUPPORTED> when executed with MEMCACHED_BEHAVIOR_USE_UDP
+enabled: memcached_version(), memcached_stat(), memcached_get(),
+memcached_get_by_key(), memcached_mget(), memcached_mget_by_key(),
+memcached_fetch(), memcached_fetch_result(), memcached_value_fetch().
+
+All other operations are supported, but they are executed in a 'fire-and-forget'
+mode: once the client has issued the operation, no attempt
+will be made to ensure the operation has been received and acted on by the
+server.
+
+libmemcached(3) does not allow TCP and UDP servers to be shared within
+the same libmemcached(3) client 'instance'. An attempt to add a TCP server
+when this behavior is enabled will result in a C<MEMCACHED_INVALID_HOST_PROTOCOL>,
+as will attempting to add a UDP server when this behavior has not been enabled.
+
 =item MEMCACHED_BEHAVIOR_NO_BLOCK
 
 Causes libmemcached(3) to use asynchronous I/O. This is the fastest transport
-available for storage functions. For read operations it is currently
-similar in performance to the non-blocking method (this is being
-looked into).
+available for storage functions.
 
 =item MEMCACHED_BEHAVIOR_SND_TIMEOUT
 
@@ -64,7 +82,9 @@ environments).
=item MEMCACHED_BEHAVIOR_HASH
 
 Makes the default hashing algorithm for keys use MD5. The value can be set
-to either MEMCACHED_HASH_DEFAULT, MEMCACHED_HASH_MD5, MEMCACHED_HASH_CRC, MEMCACHED_HASH_FNV1_64, MEMCACHED_HASH_FNV1A_64, MEMCACHED_HASH_FNV1_32, and MEMCACHED_HASH_FNV1A_32. The behavior for all hashes but MEMCACHED_HASH_DEFAULT is identitical to the Java driver written by Dustin Sallings.
+to either MEMCACHED_HASH_DEFAULT, MEMCACHED_HASH_MD5, MEMCACHED_HASH_CRC, MEMCACHED_HASH_FNV1_64, MEMCACHED_HASH_FNV1A_64, MEMCACHED_HASH_FNV1_32, MEMCACHED_HASH_FNV1A_32, MEMCACHED_HASH_JENKINS, MEMCACHED_HASH_HSIEH, and MEMCACHED_HASH_MURMUR.
+Each hash has its advantages and its weaknesses. If you don't know or don't care, just go with the default.
+Support for MEMCACHED_HASH_HSIEH is a compile-time option that is disabled by default. To enable support for this hashing algorithm, configure and build libmemcached with the --enable-hash_hsieh option.
 
 =item MEMCACHED_BEHAVIOR_DISTRIBUTION
 
@@ -138,6 +158,42 @@ connection.
 
 Enable the use of the binary protocol. Please note that you cannot
 toggle this flag on an open connection.
 
+=item MEMCACHED_BEHAVIOR_SERVER_FAILURE_LIMIT
+
+Set this value to enable automatic removal of a server after MEMCACHED_BEHAVIOR_SERVER_FAILURE_LIMIT
+consecutive connection failures.
+
+=item MEMCACHED_BEHAVIOR_IO_MSG_WATERMARK
+
+Set this value to tune the number of messages that may be sent before
+libmemcached should start to automatically drain the input queue. Setting
+this value too high may cause libmemcached to deadlock (trying to send data,
+but the send will block because the input buffer in the kernel is full).
+
+=item MEMCACHED_BEHAVIOR_IO_BYTES_WATERMARK
+
+Set this value to tune the number of bytes that may be sent before
+libmemcached should start to automatically drain the input queue (it needs
+at least 10 I/O requests sent without reading the input buffer). 
Setting
+this value too high may cause libmemcached to deadlock (trying to send
+data, but the send will block because the input buffer in the kernel is full).
+
+=item MEMCACHED_BEHAVIOR_IO_KEY_PREFETCH
+
+The binary protocol works a bit differently from the textual protocol in
+that a multiget is implemented as a pipeline of single get operations which
+are sent to the server in a chunk. If you are using large multigets from
+your application, you may improve the latency of the gets by setting
+this value so that the first chunk of requests is sent out when you hit the
+specified limit. This allows the server to start processing the requests
+and sending the data back while the rest of the requests are created and
+sent to the server.
+
+=item MEMCACHED_BEHAVIOR_NOREPLY
+
+Set this value to specify that you really don't care about the result
+from your storage commands (set, add, replace, append, prepend).
+
 =back
 
 =head1 RETURN