Such a great initiative! Thanks!
The best way to do that is to actually see them work.
Agreed, this is the best way to evaluate someone.
But, I need to challenge the quote to some extent because I don’t think we should do this based on some of the arguments it brings. In particular:
Hire them for a miniproject, even if it’s for just twenty or forty hours. You’ll see how they make decisions. You’ll see if you get along. You’ll see what kind of questions they ask. You’ll get to judge them by their actions instead of just their words.
Generally speaking, yes. But I want to highlight that this is a generalization: not all mini-projects will deliver on the promises above. In practice, this depends heavily on what the mini-project is, and it will also vary greatly from candidate to candidate. In other words, not every mini-project can surface the data points we want.
For example, suppose the mini-project is about adding a feature to the Open edX platform, and the candidate has extensive experience with Python/Django. In that case, there is a chance we won’t see how the person deals with ambiguity, finds answers, asks questions and communicates, or picks up new skills. On the other hand, we could evaluate the same candidate on those criteria if they were working on a project outside of their comfort zone.
You can even make up a fake project.
From what I understand of fake projects, this is even worse. Mock projects are designed around an idealized, abstract notion of what the ideal candidate should do. In the long run, these fake projects become unrealistic scenarios that test unrealistic candidates. In practice, I think the data points we get from fake projects are the same as if we simply hammered the candidates with coding problems: we are measuring how well they prepared themselves for the interview. That is a valid data point, but I am not sure it is the one we are looking for.
Finally, comparing the knowledge-work interview process with the car industry’s interview process is flawed, because the level of ambiguity and the skills required are different. For example, I would be curious how they would apply the same strategy to hiring car engineers.
With all that said, I am not against trying this. I think it is an excellent initiative, given the drawbacks of the trial period that Braden and others already mentioned. But let us compensate for the risks it brings, in particular with:
- A more robust interview process, one that doesn’t extend the benefit of the doubt, to compensate for losing the safety net of the trial period. And,
- A constant feedback loop: whenever something goes south during the mini-project or after, we revisit the first point to figure out what could have caught those issues.
TL;DR:
I am in favor of trying this. But I don’t entirely agree with the quote, mainly because we would lose the safety net of the trial period: a mini-project gives us a lower confidence level than a trial period in terms of the data points it offers. It is also hard to find great real-world mini-projects, and even harder to design fake ones. But if we compensate for those risks, I think this will significantly improve our hiring strategy.