{"id":127,"date":"2013-06-01T01:27:34","date_gmt":"2013-05-31T23:27:34","guid":{"rendered":"http:\/\/blog.ramses-pyramidenbau.de\/?p=127"},"modified":"2013-06-11T17:55:20","modified_gmt":"2013-06-11T15:55:20","slug":"example-implementation-of-a-least-recently-used-cache-in-c11","status":"publish","type":"post","link":"https:\/\/blog.vmexit.de\/?p=127","title":{"rendered":"Example Implementation of a Least Recently Used Cache in C++11"},"content":{"rendered":"<p>This is yet another example implementation of a Least Recently Used Cache written in C++11.<br \/>\nIt is <strong>not<\/strong> designed to be thread-safe, but thread safety could easily be achieved by locking<br \/>\nthe same mutex at the beginning of all public methods.<\/p>\n<p>This cache uses std::map and std::list as container types. A copy constructor and a move constructor are implemented as well.<br \/>\nYou may also download it <a href=\"https:\/\/blog.ramses-pyramidenbau.de\/wp-content\/uploads\/2013\/06\/LRUCache.h\">here<\/a>.<br \/>\nUsage:<\/p>\n<pre class=\"brush: cpp; light: true; title: ; notranslate\" title=\"\">\r\nLRUCache&lt;int, string&gt; cache(10); \/\/ Max. 
10 Elements\r\n\r\ncache&#x5B;4] = &quot;Foobar&quot;;\r\ncache.insert(2, &quot;Barfoo&quot;);\r\n\r\ncout &lt;&lt; cache&#x5B;2] &lt;&lt; endl;\r\n\r\nauto tmp = cache.getCopy(2);\r\nif(tmp == nullptr) {\r\n  cout &lt;&lt; &quot;Not found...&quot; &lt;&lt; endl;\r\n} else {\r\n  cout &lt;&lt; *tmp &lt;&lt; endl;\r\n}\r\n<\/pre>\n<p>LRUCache.h:<\/p>\n<pre class=\"brush: cpp; light: true; title: ; notranslate\" title=\"\">\r\n#ifndef LRUCACHE_H_\r\n#define LRUCACHE_H_\r\n\r\n#include &lt;memory&gt;\r\n#include &lt;map&gt;\r\n#include &lt;list&gt;\r\n\r\ntemplate &lt;typename Key, typename Value&gt;\r\nclass LRUCache {\r\n\r\nprivate:\r\n\ttypedef std::list&lt;Key&gt; HistoryType;\r\n\ttypedef typename HistoryType::iterator HistoryTypeIterator;\r\n\tHistoryType _history;\r\n\r\n\ttypedef std::map&lt;Key, std::pair&lt;Value, HistoryTypeIterator&gt; &gt; CacheType;\r\n\ttypedef typename CacheType::iterator CacheTypeIterator;\r\n\tCacheType _cache;\r\n\r\n\tsize_t _maxCapacity;\r\n\r\n\t\/\/ Removes the numElements least recently used entries\r\n\tvoid evict(size_t numElements = 1) {\r\n\t\tif(numElements &gt; _history.size()) {\r\n\t\t\tnumElements = _history.size();\r\n\t\t}\r\n\r\n\t\twhile(numElements--) {\r\n\t\t\tauto it = _cache.find( _history.front() );\r\n\t\t\t_cache.erase(it);\r\n\t\t\t_history.pop_front();\r\n\t\t}\r\n\t}\r\n\r\n\t\/\/ Moves the accessed key to the back of the history (most recently used)\r\n\tvoid updateHistory(const CacheTypeIterator&amp; it) {\r\n\t\t_history.splice(_history.end(), _history, it-&gt;second.second);\r\n\t}\r\npublic:\r\n\t\/\/ maxCapacity == 0 means unlimited capacity\r\n\texplicit LRUCache(size_t maxCapacity) : _maxCapacity(maxCapacity) {\r\n\r\n\t}\r\n\r\n\t~LRUCache() {\r\n\r\n\t}\r\n\r\n\tLRUCache(const LRUCache&amp; other) : _maxCapacity(other._maxCapacity) {\r\n\t\t_cache = other._cache;\r\n\t\t_history = other._history;\r\n\r\n\t\t\/\/ Adjust the stored iterators to point into the copied history list\r\n\t\tfor(auto it = _history.begin() ; it != _history.end() ; ++it)\r\n\t\t\t_cache.find(*it)-&gt;second.second = 
it;\r\n\t}\r\n\r\n\tLRUCache(LRUCache&amp;&amp; other) : _maxCapacity(other._maxCapacity) {\r\n\t\t_history = std::move(other._history);\r\n\t\t_cache = std::move(other._cache);\r\n\t}\r\n\r\n\t\/\/ Drops all elements, or evicts down to maxRemainingElements if given\r\n\tvoid dropCache(size_t maxRemainingElements = 0) {\r\n\t\tif(maxRemainingElements == 0) {\r\n\t\t\t_cache.clear();\r\n\t\t\t_history.clear();\r\n\t\t} else if (_history.size() &gt; maxRemainingElements) {\r\n\t\t\tevict(_history.size() - maxRemainingElements);\r\n\t\t}\r\n\t}\r\n\r\n\tsize_t size() const {\r\n\t\treturn _history.size();\r\n\t}\r\n\r\n\tvoid setMaxCapacity(size_t maxCapacity) {\r\n\t\t\/\/ shrinking the cache may require evicting elements\r\n\t\tif(maxCapacity &lt; _maxCapacity &amp;&amp; maxCapacity != 0)\r\n\t\t\tdropCache(maxCapacity);\r\n\t\t_maxCapacity = maxCapacity;\r\n\t}\r\n\r\n\t\/\/ Returns a copy of the element, or nullptr if it is not cached\r\n\tstd::unique_ptr&lt;Value&gt; getCopy(const Key&amp; id) {\r\n\t\tauto it = _cache.find(id);\r\n\t\tstd::unique_ptr&lt;Value&gt; retval;\r\n\t\tif(it != _cache.end()) {\r\n\t\t\tupdateHistory(it);\r\n\t\t\t\/\/ Copy the element\r\n\t\t\tretval = std::unique_ptr&lt;Value&gt;(new Value(it-&gt;second.first));\r\n\t\t}\r\n\r\n\t\treturn retval;\r\n\t}\r\n\r\n\t\/\/ Inserts a new element. 
If the element already exists, it will be overwritten.\r\n\t\/\/ Returns a reference to the inserted object\r\n\tValue&amp; insert(const Key&amp; id, Value c) {\r\n\t\t\/\/ Check whether the element already exists\r\n\t\tauto it = _cache.find(id);\r\n\t\tif(it != _cache.end()) {\r\n\t\t\t\/\/ If the element exists, overwrite it\r\n\t\t\tit-&gt;second.first = std::move(c);\r\n\t\t\tupdateHistory(it);\r\n\t\t} else {\r\n\t\t\tif(_maxCapacity != 0 &amp;&amp; _history.size() == _maxCapacity) {\r\n\t\t\t\tevict();\r\n\t\t\t}\r\n\t\t\tauto end = _history.insert(_history.end(), id);\r\n\t\t\tauto newelem = _cache.insert( std::make_pair(id, std::make_pair(std::move(c), end ) ) );\r\n\t\t\tit = newelem.first;\r\n\t\t}\r\n\r\n\t\treturn it-&gt;second.first;\r\n\t}\r\n\r\n\tValue&amp; operator&#x5B;](const Key&amp; id) {\r\n\t\tauto it = _cache.find(id);\r\n\t\tif(it != _cache.end()) {\r\n\t\t\tupdateHistory(it);\r\n\t\t\treturn it-&gt;second.first;\r\n\t\t}\r\n\r\n\t\t\/\/ Create a new default-constructed element\r\n\t\treturn insert(id, Value());\r\n\t}\r\n};\r\n#endif\r\n<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>This is yet another example implementation of a Least Recently Used Cache written in C++11. It is not designed to be thread-safe, but thread safety could easily be achieved by locking the same mutex at the beginning of all public methods. This cache uses std::map and std::list as container types. 
A copy constructor and a move constructor &hellip; <a href=\"https:\/\/blog.vmexit.de\/?p=127\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Example Implementation of a Least Recently Used Cache in C++11<\/span> <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[],"class_list":["post-127","post","type-post","status-publish","format-standard","hentry","category-c"],"_links":{"self":[{"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=\/wp\/v2\/posts\/127","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=127"}],"version-history":[{"count":7,"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=\/wp\/v2\/posts\/127\/revisions"}],"predecessor-version":[{"id":139,"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=\/wp\/v2\/posts\/127\/revisions\/139"}],"wp:attachment":[{"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=127"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=127"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.vmexit.de\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=127"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}