I personally think the change from master & slave was kind of silly; as far as I’m aware, it was driven by a bunch of people with no background in CS who thought that applying the term to something with neither race nor agency was an insult to black people.
But I digress. It led to better guidelines in the Linux kernel, which I think are useful: tailor the terms you’re using to the specifics of the task. If you have a master process whose only outward interfaces go through the slave processes, you could use the terms ‘director’ and ‘actor.’ If the master process manages slave processes that compete over the same resources, you can use the terms ‘arbiter’ and ‘mutex holder.’ If the slaves do some independent processing the master doesn’t need to know the details of, you can use the terms ‘controller’ and ‘peripheral.’
Basically, use whichever terms are most descriptive in the context of your program.
Edit: also, I don’t know why no one mentions this, but you can also use master/servant. Historically, there wasn’t a difference between a servant and a slave, but in modern usage there is, so it’s technically different, technically the same.

FP & OOP both have their use cases. Generally, I think people use OOP for stateful programming, and FP for stateless programming. Of course, OOP is excessive in a lot of cases, and so is FP.
OOP is more useful as an abstraction than as a programming paradigm. Real, human, non-computer reasoning is object-oriented, and so people find it a natural way of organizing things. It makes more sense to say “for each dog in dogs, dog.bark()” than “map(bark, dogs)”.
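To make the contrast concrete, here’s a tiny Python sketch of both styles (the Dog class and names are made up for illustration):

```python
# Hypothetical Dog class, just to illustrate the two styles.
class Dog:
    def __init__(self, name):
        self.name = name

    def bark(self):
        print(f"{self.name}: woof!")

dogs = [Dog("Rex"), Dog("Fido")]

# OOP style: iterate over the objects and ask each one to act.
for dog in dogs:
    dog.bark()

# FP style: treat barking as a function applied over the collection.
list(map(Dog.bark, dogs))
```

Both do the same thing; the OOP version just reads closer to how you’d describe it out loud.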
A good use case for OOP is machine learning. Despite the industry’s best efforts to use functional programming for it, the object-oriented approach just makes more sense. You want a set of parameters unique to each function applied to the input, which lets you use each function without referencing the parameters every single time: you can write “function(input)” instead of “function(input, parameters)”. Then, if you’re using a clever library, it will hold references to the parameters inside the functions and update them during the optimization step. This hides how the parameters influence the result, but machine learning is a black box anyway.
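Here’s a toy Python sketch of what I mean, with the parameters bundled into a callable object; the names and the update rule are made up for illustration, not any particular library’s API:

```python
# Toy linear 'layer': the parameters live on the object, so the call
# site is just layer(x), not layer(x, weight, bias).
class Linear:
    def __init__(self, weight, bias):
        self.weight = weight
        self.bias = bias

    def __call__(self, x):
        return self.weight * x + self.bias

layer = Linear(weight=0.5, bias=1.0)
y = layer(3.0)  # 2.5

# An optimizer can hold a reference to the same object and update the
# parameters in place; callers of layer(x) never see the bookkeeping.
def sgd_step(layer, grad_w, grad_b, lr=0.1):
    layer.weight -= lr * grad_w
    layer.bias -= lr * grad_b

sgd_step(layer, grad_w=0.2, grad_b=0.1)
```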
In my limited use of FP, I’ve found it useful for manipulating basic data structures in bulk. If I need to normalize a large number of arrays, it’s easy to write “map(normalize, arrays)” and call it a day. The FP-specific functions such as scan and reduce are incredibly useful, since OOP typically requires you to set up a loop and manually keep track of the intermediate results. I will admit, though, that my only real use of FP is Python list comprehensions and APL, so take whatever I say about FP with a grain of salt.
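For example, in Python, where accumulate plays the role of scan (the normalize helper here is hypothetical):

```python
from functools import reduce
from itertools import accumulate

# Hypothetical helper: scale an array so it sums to 1.
def normalize(arr):
    total = sum(arr)
    return [x / total for x in arr]

arrays = [[1, 2, 3], [4, 4], [10]]

# Bulk transformation: one map instead of an explicit loop.
normalized = list(map(normalize, arrays))

# reduce folds the sequence down to a single value...
total_length = reduce(lambda acc, arr: acc + len(arr), arrays, 0)  # 6

# ...while accumulate (a scan) keeps every intermediate result.
running_lengths = list(accumulate(arrays, lambda acc, arr: acc + len(arr), initial=0))
# [0, 3, 5, 6]
```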