I am thinking in terms of Ethereum and its gas idea. Gas is how much it costs per unit of work within the Ethereum runtime/VM, so to speak. I don't remember where in the source it is, but I checked once and, basically, the virtual machine executes the platform's lowest-level instructions, and just before or after each instruction is evaluated, a counter is incremented to measure the gas being used. So each instruction increments a counter.
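To make sure I'm describing it right: conceptually it's a bytecode interpreter whose dispatch loop charges a cost before every opcode. Here is a minimal sketch in 32-bit x86 assembly of what I mean (the real EVM is written in Go; the labels, the gas_cost table, and the gas_used/gas_limit variables here are all made up by me):

interp_loop:
    movzx eax, byte [esi]         ; fetch the next opcode
    inc   esi
    mov   ecx, [gas_cost + eax*4] ; look up this opcode's cost in a table
    add   ecx, [gas_used]
    cmp   ecx, [gas_limit]
    ja    out_of_gas              ; abort once the budget is exhausted
    mov   [gas_used], ecx         ; charge before executing
    jmp   [dispatch + eax*4]      ; handler does the work, then jumps back here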
How would you do this for assembly? What is the ideal approach? You could have every instruction followed by a counter increment, sort of like this:
_start:
    mov eax, 4              ; sys_write
    inc dword [COUNTER]     ; COUNTER is a dword in .data; NASM needs the size and brackets
    mov ebx, 1              ; fd 1 = stdout
    inc dword [COUNTER]
    mov ecx, mensagem       ; buffer
    inc dword [COUNTER]
    mov edx, len            ; length
    inc dword [COUNTER]
    int 0x80
    inc dword [COUNTER]
    mov eax, 1              ; sys_exit
    inc dword [COUNTER]
    int 0x80
But that seems a bit hardcoded. Is there instead some special assembly instruction you can use to tell the CPU to count somehow? Or, if not, do you essentially need a VM to interpret your assembly/opcodes and inject the counter between each instruction? What is the standard approach here? And how do you do it in a performance-optimized way?
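The closest I can get to making that less hardcoded is an assembler macro that emits the increment after every wrapped instruction, e.g. in NASM (the macro name is my own invention):

%macro metered 1+
    %1                      ; the real instruction
    inc dword [COUNTER]     ; the bookkeeping
%endmacro

_start:
    metered mov eax, 4
    metered mov ebx, 1
    metered mov ecx, mensagem
    metered mov edx, len
    metered int 0x80

That cleans up the source, but at runtime it is still the same thing: every useful instruction pays for an extra increment.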
In addition, this needs to be done on a per-user basis, so some data will probably need to be passed around as execution context so we know who to charge for these specific instructions. If there is a built-in performance-counting assembly instruction, can it be used in this case?
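For instance, I know RDTSC reads the CPU's cycle counter, so I could imagine sampling it around each user's slice of execution, something like this (the slice_start and user_cycles variables, and EDI holding the user's index, are all made up):

    rdtsc                           ; EDX:EAX = current cycle count
    mov  [slice_start], eax         ; ignoring the high half for brevity
    ; ... run this user's code ...
    rdtsc
    sub  eax, [slice_start]         ; cycles spent in the slice
    add  [user_cycles + edi*4], eax ; charge them to this user

But that counts cycles, not instructions, so it measures time rather than deterministic work. I have also read that RDPMC can read hardware performance counters such as instructions retired, but as I understand it the kernel has to configure the counters (and allow user-mode reads), and they are per-core, so the per-user attribution would still be on me.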