When a thread migrates, it keeps its stack and registers, so any data contained in them can be used in the destination process (correct me if I’m wrong).
Only a slight correction: at least my implementation makes the stack used by previous processes completely inaccessible. If the stack from previous processes were writable, then a bug in the current process could overwrite important data in the previous process, and the bug would spread across processes, which is not what we want. Theoretically, I guess you could just map the stack pages read-only instead of completely inaccessible, but then you might be sharing security-critical data, so at least for now I require that all data be passed via registers. For larger amounts of data, shared memory can be used, but that of course increases the complexity of a given implementation somewhat. This is a limitation that falls under the ‘loss of flexibility’ I mentioned in the post.
However, how does the thread find process-specific addresses and handles (e.g. a mutex)? […] Would there need to be a data structure stored at a fixed offset in memory that contains the destination address of the receiving process?
You’d use global variables of some kind, yeah. Since each thread enters at _start, their addresses are known by the compiler, so you don’t really need anything special; it’s largely just regular multithreaded programming. For instance, in the filesystem server that I’m writing, I just have a big global array, as large as the maximum number of concurrent processes (which is known statically), that maps the process ID to a list of open file handles. Since _start knows which process the migration came from (the process ID and thread ID are set by the kernel), the lookup to file handles is, practically speaking, just one atomic load of extra overhead.