AMX/Small bytecode is a contiguous block of memory, split into one cell (4 or 8 bytes) per opcode and another cell for each parameter. Each opcode takes one parameter, except for the debug opcodes, which can take multiple parameters.
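To make the layout concrete, here's a minimal sketch of walking such a code section. The cell width and the opcode numbers are assumptions for illustration, not the real amx.h definitions:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef int32_t cell;                  /* 4 bytes here; 8 on 64-bit builds */

    /* Hypothetical opcode numbers, not the real amx.h values. */
    enum { OP_PUSH_C = 1, OP_HALT = 2 };

    static void dump(const cell *code, size_t ncells)
    {
        for (size_t i = 0; i + 1 < ncells; i += 2) {
            cell op    = code[i];          /* one cell for the opcode    */
            cell param = code[i + 1];      /* one cell for its parameter */
            printf("opcode %d, parameter %d\n", (int)op, (int)param);
        }
    }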
There are four ways of executing this bytecode.
Normal: Each cell is read and interpreted at face value. Jumps/calls are relative addresses, and opcodes are dispatched through a huge case/switch table.
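A toy version of this mode might look like the following; the opcodes and the jump convention are made up, and the real loop lives in amx_Exec():

    #include <stdint.h>
    #include <stddef.h>

    typedef int32_t cell;

    enum { OP_ADD_C, OP_JUMP, OP_HALT };   /* hypothetical opcode values */

    static cell run(const cell *code)
    {
        cell pri = 0;                      /* PRI, the primary virtual register  */
        ptrdiff_t cip = 0;                 /* code instruction pointer, in cells */
        for (;;) {
            switch (code[cip++]) {         /* the big case/switch dispatch */
            case OP_ADD_C:
                pri += code[cip++];        /* parameter cell */
                break;
            case OP_JUMP:
                cip += code[cip];          /* relative jump: offset in cells
                                              from the parameter cell (a toy
                                              convention) */
                break;
            case OP_HALT:
                return pri;                /* parameter cell ignored here */
            }
        }
    }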
GCC/Linux: GCC has a labels-as-values feature (computed goto), so beforehand all of the opcodes are scanned and relocated to the physical addresses of a big list of labels. This means opcode dispatch is direct instead of going through a switch statement, which roughly doubles the speed on Linux.
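A toy version of the same trick, using GCC's labels-as-values extension. The cell type is widened to intptr_t here so a label address fits in a cell, and the opcode numbering is again hypothetical; requires GCC or Clang:

    #include <stdint.h>
    #include <stddef.h>

    typedef intptr_t cell;

    enum { OP_ADD_C, OP_HALT, NUM_OPS };

    static cell run(cell *code, size_t ncells)
    {
        static void *handler[NUM_OPS] = { &&op_add_c, &&op_halt };
        cell pri = 0;
        size_t cip = 0;

        /* One-time relocation pass: each opcode cell becomes the physical
           address of its handler label. (Mutates the code, so this toy is
           one-shot; it also assumes every opcode has exactly one parameter.) */
        for (size_t i = 0; i < ncells; i += 2)
            code[i] = (cell)handler[code[i]];

    #define NEXT() goto *(void *)code[cip++]   /* direct dispatch, no switch */
        NEXT();

    op_add_c:
        pri += code[cip++];                    /* parameter cell */
        NEXT();
    op_halt:
        return pri;
    #undef NEXT
    }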
ASM: The assembly method is the same as the GCC method, except the opcode handlers are written in x86 assembly. Each opcode cell becomes a physical-address jump into an assembly proc that handles its parameters.
JIT: The JIT is something quite different. It pre-processes the bytecode and does AOT (ahead-of-time compilation); a minimal sketch of the core trick follows the list:
1. Reads an opcode (say, PUSH.PRI)
2. Copies a template of a few assembly instructions that implement it, with garbage placeholders where the parameters go.
3. Copies the opcode parameters over the garbage.
4. Finds the next opcode and goes back to 1.
5. When done, it has destroyed the original bytecode and replaced it with native x86 code.
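Here is the sketch promised above: a minimal copy-and-patch illustration on x86-64 Linux. The template bytes and the single fake "opcode" are stand-ins for illustration, not the real AMX JIT:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Template: mov eax, 0xDEADBEEF ; ret -- the 4 bytes after the
           0xB8 opcode are the garbage data to be patched. */
        unsigned char tmpl[] = { 0xB8, 0xEF, 0xBE, 0xAD, 0xDE, 0xC3 };

        unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        memcpy(buf, tmpl, sizeof tmpl);    /* step 2: copy the template   */
        int32_t param = 42;                /* pretend this parameter came */
        memcpy(buf + 1, &param, 4);        /* from the bytecode (step 3)  */

        /* Flip the page to executable and jump straight into the code
           (the data-to-function-pointer cast isn't strictly portable C,
           but it's fine on POSIX platforms). */
        if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) return 1;
        int (*fn)(void) = (int (*)(void))buf;
        printf("jitted code returned %d\n", fn());   /* prints 42 */
        return 0;
    }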
It also relocates all jumps/switches/etc. to physical addresses. Further execution is simply a jump into the generated code, rather than going through amx_Exec()'s bytecode interpreter.
This means the JIT takes all of the AMX bytecode and outputs optimized assembly that accomplishes the same thing. And because it has NO interpreted jumps or opcode switches (unlike the other methods), it's lightning fast. It also ignores all debug opcodes when compiling, for speed; for AMXX I wrote an optional feature that keeps them so you can debug properly.
In my tests the JIT was usually 10-12 times faster than normal bytecode interpretation.
KWo asked me to clear this up; I hope it helps.