
I made a mistake working on Node.js in the beginning by not utilizing Redis, Memcached or another in-memory storage system. Now it's far too late to rewrite everything to fit my code around those APIs.

However, I just recently found out about forking processes and how beneficial they can be, especially since I'm working on a game server.

The problem I have is that memory is not shared between forked processes in Node.js... until I found a TCP memory-sharing module called Amensia.

With all that said, I have some questions about it pertaining to Node.js and TCP in general:

1) The maximum size of a TCP packet is around 64k, so when using this module can I only share data up to 64k in size?

2) I use global GAMES and users objects to store player data. These objects are updated when a player moves on a map (x,y positions) and upon other actions. Would sending all of this data across TCP turn into a bottleneck?
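Roughly, the state looks like this (a minimal sketch; the field names are hypothetical, not taken from my actual code):

```js
var GAMES = {};   // gameId -> game state
var users = {};   // userId -> player state

// called on every movement packet
function onPlayerMove(userId, x, y) {
  var player = users[userId];
  if (!player) return;
  player.x = x;                                // updated many times per second
  player.y = y;
  GAMES[player.gameId].lastUpdate = Date.now();
}
```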

NiCk Newman
  • *Sharing* with this module means duplicating the data to different processes. Is that what you'd want? Furthermore, I think it can't be too complicated to abstract your store/fetch mechanisms to be able to use Redis/Memcached/etc. – Tobi Sep 04 '15 at 15:24
  • Good point... I was thinking about that just now. I would say heck no, because these temporary objects can hold hundreds of players' data, and for them to be tossed around over TCP is just asking for trouble, right? Maybe Redis/Memcached is what I have to do... It's just, over 20k lines in, it's going to be a PITA. – NiCk Newman Sep 04 '15 at 15:29
  • Probably the store/fetch code will be 100 lines. I strongly recommend Redis. – Tobi Sep 04 '15 at 15:36
  • @NiCkNewman Did I get you correctly that the majority of your processes run on the same localhost? If yes, then it might make sense to avoid the TCP overhead by using the ZeroMQ `inproc://` transport class and let your processes "discuss" through smart (scalable) Formal Communication Patterns. I come from the low-latency corner, so I'm a bit deformed in this direction, where a low memory footprint and every nanosecond count, so sorry if your gaming environment has other priorities and/or your OS capabilities disallow making use of this approach :o) – user3666197 Sep 18 '15 at 04:18

2 Answers

3

A minimum overhead approach

Equip all your localhost forked processes with an inter-process smart-messaging layer.

This way your "sharing" can be achieved both in the abstract sense and (in the ZeroMQ case, very attractively) in the literal sense, since ZeroMQ allows you to avoid data duplication via a shared buffer (the zero-copy maxim).

If your OS allows it, the ipc:// and inproc:// transport classes are almost overhead-less, and inproc:// (thanks to the great architecture thinking of the ZeroMQ team) does not even require any additional thread (and its CPU/RAM overhead) once the context is created with zero I/O threads, Context(0).
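As an illustration only, a minimal sketch of such a layer using the legacy `zmq` npm binding over ipc:// between forked processes (the endpoint path and message shape are assumptions; the newer `zeromq` package exposes a different API):

```js
// publisher process: broadcasts state updates over a local ipc:// endpoint
var zmq = require('zmq');                      // legacy binding; newer package is 'zeromq'

var pub = zmq.socket('pub');
pub.bindSync('ipc:///tmp/game-state.ipc');     // hypothetical endpoint path

function broadcastPosition(userId, x, y) {
  // multipart message: topic frame + JSON payload, so subscribers can filter by topic
  pub.send(['pos', JSON.stringify({ id: userId, x: x, y: y })]);
}

// subscriber process: receives only the 'pos' topic
var sub = zmq.socket('sub');
sub.connect('ipc:///tmp/game-state.ipc');
sub.subscribe('pos');
sub.on('message', function (topic, payload) {
  var update = JSON.parse(payload);
  // apply `update` to this process's view of the game state here
});
```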

An approach even less subject to overhead (if your app fits nanomsg)

In case ZeroMQ seems too powerful for your particular goal, you may be interested in its younger sister, nanomsg, which Martin Sustrik, co-father of ZeroMQ, has spun off, and which also has a node.js port available.

Where to go for more details?

The best next step you can take in either the ZeroMQ or nanomsg case is to get a bit more of a global view, which may sound complicated for the first few things one tries to code with ZeroMQ. At the very least, jump to page 265 of Code Connected, Volume 1, if you are not reading it step by step from the start.

The fastest learning curve would be to first take a look at the pair of Fig. 60 (Republishing Updates) and Fig. 62 (HA Clone Server) for a possible high-availability approach, and then go back to the roots, elements and details.

Sample scenario

If you fall in love with this mode of thinking, you will love Martin Sustrik's blog posts; a smart man, indeed. It is worth the time to at least get inspired by his views and experience.

user3666197
  • I should have read about scaling and IPC before starting on my node app. Hopefully it's not too late; I've got a lot of rewriting to do. – NiCk Newman Sep 18 '15 at 05:52
  • Good luck, NiCk. Nevertheless, going low-overhead and db-less, you will be well ahead of the "conventional" crowd with the nano/Zero smart-messaging. That's IMHO **worth your sweat and tears**, as Churchill would say... – user3666197 Sep 18 '15 at 06:39
1

1) You should not have any problems with TCP packet size. Node will buffer/queue your data if it's too big and send it when the OS gives it a writable socket file descriptor. You may hit performance issues only if you are writing more than your network bandwidth per second; at that point Node will also use more RAM to queue all these messages.

https://nodejs.org/api/net.html#net_socket_buffersize
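A minimal sketch of that backpressure behaviour with the core `net` module (the port and the newline framing are just assumptions for illustration):

```js
var net = require('net');

var socket = net.connect({ host: '127.0.0.1', port: 7000 });   // hypothetical peer

function sendState(obj) {
  var payload = JSON.stringify(obj) + '\n';       // newline-delimited JSON framing
  var flushed = socket.write(payload);            // Node splits/queues large writes itself
  if (!flushed) {
    // the kernel buffer is full; Node is now queueing in user memory
    console.log('queued, socket.bufferSize =', socket.bufferSize);
    socket.once('drain', function () {
      console.log('buffer drained, safe to resume writing');
    });
  }
}
```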

2) Most games use TCP or UDP for real-time communication. It can be a bottleneck, just as anything else (RAM, CPU, bandwidth, storage) can. At some level of stress, one or more resources will run out, fail or perform badly. It's generally good practice to use an architecture that can grow horizontally (adding more machines) once all optimizations for your current bottleneck are done and you still need to support more simultaneous users on your game server.

https://1024monkeys.wordpress.com/2014/04/01/game-servers-udp-vs-tcp/

You'll probably use TCP to connect to a Redis server (but you can also use a unix socket).
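For instance, the store/fetch abstraction Tobi mentions in the comments can stay quite small; a rough sketch with the `redis` npm client (the key and field names here are made up):

```js
var redis = require('redis');

// TCP connection; redis.createClient('/tmp/redis.sock') would use a Unix socket instead
var client = redis.createClient(6379, '127.0.0.1');

// store a player's position under a hypothetical key scheme
function savePlayer(userId, x, y, cb) {
  client.hmset('player:' + userId, { x: x, y: y }, cb);
}

// fetch it back (hash fields come back as strings)
function loadPlayer(userId, cb) {
  client.hgetall('player:' + userId, cb);
}
```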

If you only need inter-process communication (and not inter-machine), you should take a look at the "cluster" Node.js core module. It has built-in IPC.
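A bare-bones sketch of that built-in IPC channel (the message shapes are invented for illustration):

```js
var cluster = require('cluster');

if (cluster.isMaster) {
  var worker = cluster.fork();

  // master -> worker over the built-in IPC channel
  worker.send({ cmd: 'spawnPlayer', id: 42 });

  worker.on('message', function (msg) {
    console.log('worker replied:', msg);
  });
} else {
  // worker -> master
  process.on('message', function (msg) {
    process.send({ cmd: 'ack', id: msg.id });
  });
}
```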

fermads