I am trying to create a map in a Boost managed shared memory segment, along with a synchronization wrapper class for the map. I previously did roughly the same thing with vectors and it worked fine, but with maps I get the compile error quoted below (C2664). Am I doing this all wrong? Code in this repo

Synchronized class:

#include <array>
#include <cstdint>
#include <iostream>
#include <utility>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/map.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/thread/lock_guard.hpp>

namespace node_cluster_cache {

    using namespace boost;
    using namespace boost::interprocess;
    using std::pair;
    typedef uint64_t int64;
    typedef uint32_t int32;
    typedef std::array<int32, SIZE> msg_arr;
    typedef std::array<char, ACCESS_SIZE> access_name;

    template<class T>
    class SynchronizedMap {
    public:
        typedef allocator<T, managed_shared_memory::segment_manager> allocator_type;
        typedef lock_guard<interprocess_mutex> lock;
        
    private:
        map<access_name, T, allocator_type> _map;
        mutable interprocess_mutex io_mutex;
        mutable interprocess_condition wait_condition;

    public:
        SynchronizedMap(allocator_type alloc) : _map(alloc) {};
        void insert(const access_name name, T msg_part) {
            lock _lock(io_mutex);
            _map.insert(pair<access_name, T>(name, msg_part));
        }
        int32 size() const {
            lock _lock(io_mutex);
            return _map.size();
        }
        bool empty() const {
            lock _lock(io_mutex);
            return _map.empty();
        }
        void clear() {
            lock _lock(io_mutex);
            _map.clear();
        }
        bool erase(access_name name) {
            lock _lock(io_mutex);
            return _map.erase(name);
        }
        bool erase(typename map<access_name, T>::iterator it) {
            lock _lock(io_mutex);
            return _map.erase(it);
        }
    };
};

Message structure:

struct Message {

        int64 bit_no;
        int64 pid;
        int64 part_no;
        int64 size;
        msg_arr data;

        Message() {}

        Message(int64 bn, int64 p, int64 pn, int64 s, msg_arr& d) {
            bit_no = bn;
            pid = p;
            part_no = pn;
            size = s;
            safe_copy(data, d);
        }

        Message(int64 bn, int64 p, int64 s, msg_arr& d) {
            bit_no = bn;
            pid = p;
            part_no = 0;
            size = s;
            safe_copy(data, d);
        }
    };
    template<size_t N>
    static inline void safe_copy(std::array<int32, N>& dst, const std::array<int32, N>& src) {
        #undef min  // Windows headers define a min() macro that would break std::min below
        std::copy_n(src.data(), std::min(src.size(), N), dst.data());
        dst.back() = 0;
    }

Created and used like this:

this->shmem = new managed_shared_memory(create_only, "node_cluster_cache", SHMEM_SIZE);
this->alloc_inst = new SynchronizedMap<Message>::allocator_type(this->shmem->get_segment_manager());
this->cache_map = this->shmem->construct<SynchronizedMap<Message> >("data_vector")(*(this->alloc_inst));
msg_arr test{ 123,456,789 };
access_name name = { 't', 'e', '\0' };
int64 a = 1;
int64 b = 2;
Message c(a, b, 4, test);
this->cache_map->insert(name, c);

The error itself:

Severity: Error    Code: C2664    Project: node-cluster-cache
File: C:\boost_1_74_0\boost\interprocess\detail\named_proxy.hpp    Line: 85

'node_cluster_cache::SynchronizedMap<node_cluster_cache::Message>::SynchronizedMap(const node_cluster_cache::SynchronizedMap<node_cluster_cache::Message> &)': cannot convert argument 1 from 'boost::interprocess::allocator<T,boost::interprocess::segment_manager<CharType,MemoryAlgorithm,IndexType>>' to 'const node_cluster_cache::SynchronizedMap<node_cluster_cache::Message> &'

1 Answer


Changing allocator_type to allocate the map's value pair and defining a comparator for std::array<char, ACCESS_SIZE> solved the issue. boost::interprocess::map takes Key, T, Compare, Allocator as template parameters, so the allocator belongs in the fourth position (and must be an allocator for pair<const Key, T>), with the comparator in the third:

typedef pair<const access_name, T> value_type;
typedef allocator<value_type, managed_shared_memory::segment_manager> allocator_type;

// Comparator that orders keys as nul-terminated strings. It must return bool
// (a strict weak ordering), not -1/0/1 like strcmp.
struct cmp_str {
    bool operator()(const access_name& a, const access_name& b) const {
        for (size_t i = 0; i < a.size(); i++) {
            if (a[i] != b[i]) {
                return a[i] < b[i];
            }
            if (a[i] == '\0') {
                return false;
            }
        }
        return false;
    }
};

map<access_name, T, cmp_str, allocator_type> _map;
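
With those typedefs, the construction code from the question works unchanged. A minimal usage sketch, assuming the same SHMEM_SIZE, SIZE and ACCESS_SIZE constants and the Message struct shown above:

using namespace boost::interprocess;

// create the shared segment and an allocator bound to its segment manager
managed_shared_memory shmem(create_only, "node_cluster_cache", SHMEM_SIZE);
SynchronizedMap<Message>::allocator_type alloc(shmem.get_segment_manager());

// construct the synchronized map inside the segment, then insert one entry
auto* cache_map = shmem.construct<SynchronizedMap<Message> >("data_vector")(alloc);

msg_arr test{ 123, 456, 789 };
access_name name = { 't', 'e', '\0' };
Message c(1, 2, 4, test);
cache_map->insert(name, c);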
  • It really does sound a lot like everything could be 10x simpler. Why are the arrays of int32, but still "behave" as if they should be nul terminated strings? Are you sure you don't just want `wchar_t`, and then `wstring` with the allocator? You again made everything [new/delete allocated](https://stackoverflow.com/questions/65233816/boostinterprocessmessage-queue-no-message-received-in-second-process#answer-65243735:~:text=Don't%20use%20new%20or%20delete) which maximizes bug potential. Maybe you could describe what you want to achieve and see whether we can help you think of simpler ways – sehe Dec 18 '20 at 19:50
  • @sehe I'm trying to make a caching system that all child processes of a nodejs server can use as a simple unified cache, as a node native addon, instead of running redis or something else of the sort. I thought converting all my data into int32 buffers might simplify storage. (Note: some of the cached data might be text, binary data from images or videos, etc.) – t348575 Dec 20 '20 at 04:27
  • What's the scale? Can there be fixed limits (this will vastly improve performance) and is the key type (i.e. `access_name` here) always going to be POD-like? Those are all factors that would make it easy to vastly simplify the data structures for shared memory and improve the performance. – sehe Dec 20 '20 at 15:23
  • Oh I now noticed that `access_name` declaration was buried in the first code snippet. So, POD. – sehe Dec 20 '20 at 15:30
  • @sehe Instead of using the message structure shown above, I am now using another allocator to place a vector as the value in the map's key-value pairs. I use a vector because I cannot be sure how large the data to be cached will be. Would using N constant-sized arrays with the same key name provide superior performance compared to using a vector? [code](https://github.com/t348575/node-cluster-cache) – t348575 Dec 20 '20 at 17:08
  • I wouldn't think so, but if you have, say, 90% of values <1024 then you could optimize for that (see e.g. `boost::container::small_vector`) – sehe Dec 20 '20 at 19:49
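
For reference, a minimal sketch of the vector-valued layout discussed in these comments; the segment name, segment size, and key length below are illustrative placeholders, not values from the post:

#include <algorithm>
#include <array>
#include <cstdint>
#include <utility>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/map.hpp>
#include <boost/interprocess/containers/vector.hpp>

namespace bip = boost::interprocess;

typedef bip::managed_shared_memory::segment_manager      segment_manager_t;
typedef std::array<char, 32>                             access_name;   // fixed-size POD key (placeholder length)
typedef bip::allocator<std::uint32_t, segment_manager_t> data_allocator;
typedef bip::vector<std::uint32_t, data_allocator>       shm_vector;    // variable-length value
typedef std::pair<const access_name, shm_vector>         map_value;
typedef bip::allocator<map_value, segment_manager_t>     map_allocator;

struct cmp_name {   // plain lexicographic ordering over the whole array
    bool operator()(const access_name& a, const access_name& b) const {
        return std::lexicographical_compare(a.begin(), a.end(), b.begin(), b.end());
    }
};

typedef bip::map<access_name, shm_vector, cmp_name, map_allocator> shm_map;

int main() {
    bip::shared_memory_object::remove("vector_value_demo");
    bip::managed_shared_memory shmem(bip::create_only, "vector_value_demo", 1 << 20);

    // construct the map inside the segment, passing the comparator and allocator
    shm_map* cache = shmem.construct<shm_map>("cache")(
        cmp_name(), map_allocator(shmem.get_segment_manager()));

    // every nested container needs an allocator bound to the same segment
    shm_vector value(data_allocator(shmem.get_segment_manager()));
    std::uint32_t raw[] = { 123, 456, 789 };
    value.assign(raw, raw + 3);

    access_name key = {};
    key[0] = 't'; key[1] = 'e';
    cache->emplace(key, std::move(value));

    bip::shared_memory_object::remove("vector_value_demo");
}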