A Different Trial Process for Recruitment

I was recently reading the book Rework (a book that I really like), and this little bit jumped out at me:

Test-drive employees

Interviews are only worth so much. Some people sound like pros but don’t work like pros. You need to evaluate the work they can do now, not the work they say they did in the past. The best way to do that is to actually see them work. Hire them for a miniproject, even if it’s for just twenty or forty hours. You’ll see how they make decisions. You’ll see if you get along. You’ll see what kind of questions they ask. You’ll get to judge them by their actions instead of just their words.

You can even make up a fake project. In a factory in South Carolina, BMW built a simulated assembly line where job candidates get ninety minutes to perform a variety of work-related tasks.

OpenCraft definitely shares the philosophy that it’s better to evaluate someone by working with them on real projects than by relying on [interviews + resumes + cover letters + references]. In fact, we used to have a process fairly similar to what this book suggests. Maybe it’s time to go back to that model?

I think that we’ve all seen some downsides to our long trial period process lately, so I’m looking for a recruitment manager from one cell who would be interested in developing and trying out a much shorter trial process with me.

- Trial Period
+ Trial Project

What I have in mind is something like this:

  • We assign a ~20-hour task to the candidate. Ideally we would have this task lined up and ready even before we post job ads.
    • It can be a contribution to Open edX, Tutor, Workflow Manager, Ocim, or anything, as long as it’s open source.
  • We also assign a reviewer, and give the candidate guest access to Mattermost, so they can collaborate with their reviewer.
  • We don’t onboard them onto OpenCraft’s other systems. We only onboard people who pass the trial project.
  • The candidate must set up devstack before starting the task (if required), and then has a fairly tight deadline to complete the task (one week?).
    • The goal here is that they can complete the trial project without quitting their current job, but they’ll still have to be disciplined and show time management skills, as well as log their hours, etc.
    • The PR must be peer reviewed and at least “mergeable” by the deadline; it doesn’t have to actually be merged if there are external factors delaying that.
  • We look for candidates who can complete the devstack setup and trial project without needing a lot of hand-holding, but who still show strong communication skills.

But the details can be refined with whoever wants to try this out with me.

Problems I’m trying to solve are:

  • The trial period process is a barrier that keeps some great people from applying, and in some cases it can be a bad experience for people who don’t pass.
    • With the trial project, people don’t have to make a big commitment (like quitting their job) in order to be considered, and even if they don’t pass they’ll still end up with an open source contribution under their name and extra money.
  • Onboarding, mentoring, and offboarding people who don’t pass the trial period wastes a ton of time, not to mention the impact to the project of switching assignees and sometimes even redoing the work.
  • When someone is really good (which is what we’re looking for), it’s often quite obvious right away, so a long trial period isn’t needed to confirm it.
  • With the current process, people in their trial period often get assigned only small tasks, which makes it hard to evaluate their work.
  • The trial period process makes recruitment decisions take a long time (months), so our capacity and our workload are often out of sync.

So, does someone want to try this out?


Ticket for this post: MNG-2410

11 Likes

+100 to this. @jvdm, you’re set to become Recruitment Manager for Serenity next sprint. Would you like to take @braden up on it?

@braden I would love to try new things as recruitment manager, for sure. I’ve been doing a lot of recruitment lately and I do feel we should do better; it’s very frustrating to get to know amazing people and then see them drop out during the process.

For me this is always the hardest part. We’ve been struggling to find tasks for newcomers in the past sprints (I hope this is just a Falcon problem), and keeping a set of nice-to-have tasks set aside just for candidates could get tricky. But it’s a great idea for sure, and I would love to get to that point.

1 Like

Definitely worth trying! If we could shorten the time to decide without reducing the quality of our selection, that would be ideal. Imho, with this approach, since we’ll have less time for course corrections, we’ll need to be very selective and avoid giving the benefit of the doubt. It is often obvious when the candidate is good – but that also means we’ll need to turn down the ones who are more average, or about whom we aren’t 100% sure.

Also, at least at first, we should still conduct a review after two months, to make sure we didn’t get it wrong? It could be a performance review instead of an end-of-trial review.

I agree, this is a great idea.

Agree here too, but to do this, we have to be 100% clear on our acceptance criteria for the trial project, both internally and with the candidate. These projects will be tricky to construct, and so we may actually have to use “fake” projects sometimes to keep them fair and worthy of evaluation. They’ll need:

  • strict time allowance
  • availability of reviewers
  • no external blockers
  • fully specified – hmm, on second thought, this would mean we can’t gauge their capacity for ambiguity, which is pretty important. Worth discussion.

And there are ways we can be flexible, at least until the project itself begins. Things like letting the candidate decide when the project begins, so they can schedule it for a period when they’ve got enough time to dedicate to it.

:+1:

Thank you for coming up with this idea, @braden! :slight_smile:

It will be important to clearly mention what will be reviewed, what the possible outcomes are, and how it is different from the end-of-trial review.

Love the idea! We should give it a try. :slight_smile:

I’m very much in favor of this idea as well. Our trial process was similar to this proposal in OpenCraft’s early days, and I think it worked quite well.

1 Like

Yes, and the advantage this method has over the original one is that we won’t be waiting on edX to review our applicants’ work.

1 Like

I absolutely support the idea!

I’ve had an interview and conducted interviews pretty similar to this in the past (and that was a positive experience from multiple angles).

As a candidate, the stress factor was lower than in a situation where someone has to leave their 9-5 job. Time can be self-managed, and the feeling that there is a safety net is pretty comforting.

From an interviewer’s point of view, it was really easy to decide who would fit and who wouldn’t. The range of tasks and aspects we can look for is almost infinite, and we can set up the task to resemble our usual day-to-day work: communication with a stakeholder is needed, there is some uncertainty that needs clarification, and so on. (I don’t mean to make the candidates sweat, but to set up a realistic fake project if we have no real one.)

This sounds like a great idea! What I am worried about, though, is whether we can reasonably evaluate the same factors we do right now. For instance, this gives us an idea of how the developer would work in relative isolation, while following a very different process than the one we use regularly. However, it might not give us as good an idea of:

  • how well this person does as part of the larger team
  • how well they can handle our sprint process
  • devops stuff
  • how good they are at reviews
  • how good they are at smaller tasks

I feel like we might still benefit from a shorter trial period of 1-2 sprints after candidates pass this step so we know for sure.

1 Like

@kshitij I know, but there is no perfect process, and I suspect that something like this will be “good enough” for an evaluation. Plus, it’s unlikely that someone would demonstrate strong communication on Mattermost, ask good questions about a task, implement it well, work well with the reviewer, get it done before the deadline, and then turn out to be bad at code reviews or terrible at smaller tasks, or something like that. And if there is room for improvement, they’ll likely be able to improve with mentoring, as long as they’re strong on all the fundamentals like communication, time management, and technical skills, which this trial project should test.

2 Likes

Such a great initiative! Thanks!

The best way to do that is to actually see them work.

Agreed, this is the best way to evaluate someone.

But I need to challenge the quote to some extent, because I don’t think we should adopt this based on some of the arguments it makes. In particular:

Hire them for a miniproject, even if it’s for just twenty or forty hours. You’ll see how they make decisions. You’ll see if you get along. You’ll see what kind of questions they ask. You’ll get to judge them by their actions instead of just their words.

Generally speaking, yes. But I want to highlight that this is a generalization, and not all mini-projects will deliver on the promises above. Moreover, in practice, this is highly dependent on what the mini-project is, and will also vary greatly from candidate to candidate. In other words, not every mini-project can show the data points we want.

For example, suppose the mini-project is about adding a feature to the Open edX platform, and the candidate has extensive experience with Python/Django. In that case, there is a chance we won’t see the person dealing with ambiguity, how they find answers, how they ask questions and communicate, how they go about learning new skills, etc. On the other hand, we could evaluate the same candidate on those criteria if they were working on a project outside their comfort zone.

You can even make up a fake project.

From what I understand of fake projects, this is even worse. Mock projects are designed from an idealized, abstracted idea of what the ideal candidate should do. In the long run, these fake projects become unrealistic scenarios testing unrealistic candidates. In practice, I think the data points we will get from fake projects are the same as if we simply hammered the candidates with coding problems; that is, we’d be measuring how well they prepared themselves for the interview. It is a valid data point, but I’m not sure it is what we are looking for.

Finally, the idea of comparing the knowledge-work interview process with the car industry’s interview process is flawed, because the levels of ambiguity and the skills required are different. For example, I would like to know how they would apply the same strategy to hiring car engineers.

With all that said, I am not against trying this. I think it is an excellent initiative because of the drawbacks of the trial period that Braden and others already mentioned. But let us compensate for the risk it brings, in particular:

  1. A more robust interview process, one that doesn’t give the benefit of the doubt, to compensate for losing the safety net of the trial period. And,
  2. A constant feedback loop: whenever something goes south during the mini-project or afterwards, we revisit (1) to find what could have caught those issues.

TL;DR:

I am in favor of trying this. But I don’t entirely agree with the quote, mainly because we will lose the safety net of the trial period: the mini-project offers a lower confidence level than the trial period in terms of the data points it provides. Also, it is hard to find great mini-projects in the real world, and even harder to design fake ones. But if we compensate for the risks, I think it will significantly improve our hiring strategy.

1 Like

Yes, I didn’t mention this in my original post, but I definitely think the initial interview screenings would have to be a bit harder to pass before people get to the trial project. At the same time, we have to be careful, because this tends to select for people who are good at doing interviews, which is not a skill that will come up in the job at all.

Trial period or not, if someone is really not working out, and not listening to feedback, we can stop working with them.

BTW, previous jobs I’ve worked at didn’t have trial periods, but did have a “probationary period” when you start. In my home jurisdiction, employees can be fired “without cause” (basically for any reason) and with no notice (or further pay) anytime in their first three months. OpenCraft’s contract is actually much more generous than this, and gives a significant notice period from day one. If we proceed with this, I would be in favor of such a probationary period model (you can be terminated with no notice in the first X month[s]), which would provide us more of a safety net.

Well, as mentioned in this thread, we used to do exactly that. And that’s also why I suggest that we have a good project lined up before even posting the job ad.

1 Like

The only issue I see with this is that offboarding might become too painful. Currently, newcomers don’t get access to ansible-secrets, configuration-secure, and other hard-to-rotate credentials, which makes offboarding easier.

We can either limit access during the probation period (which might severely limit the work available, especially in DevOps-heavy sprints) or come up with managed credentials using something like Vault + Boundary (I’m talking in the long run here - not trying to push for more internal projects now :wink:).
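
For illustration only, here’s a minimal sketch of what the Vault side of that could look like, assuming a hypothetical `trial-member` policy scoped to a dedicated trial-projects secrets path. The Vault URL, policy name, and path are made up (not anything we actually run), and it uses the `hvac` Python client:

```python
# Minimal sketch (hypothetical names throughout): mint a short-lived,
# narrowly scoped Vault token for a trial member. Assumes a "trial-member"
# policy already exists in Vault, e.g.:
#
#   path "secret/data/trial-projects/*" {
#     capabilities = ["read", "list"]
#   }
import os

import hvac  # Python client for HashiCorp Vault

client = hvac.Client(
    url="https://vault.example.com:8200",  # illustrative Vault address
    token=os.environ["VAULT_TOKEN"],       # an operator token with token-creation rights
)

# Create a non-renewable token that expires after one week (the trial
# deadline), so a failed candidate's access lapses on its own.
resp = client.auth.token.create(
    policies=["trial-member"],
    ttl="168h",
    renewable=False,
    display_name="trial-project-candidate",
)
trial_token = resp["auth"]["client_token"]
print(f"Hand this token to the candidate: {trial_token}")

# If the trial ends early, the token can be revoked immediately instead:
# client.auth.token.revoke(trial_token)
```

The appeal of something like this is that offboarding becomes automatic: if we never extend the token, access simply lapses at the trial deadline, and nothing hard-to-rotate is exposed in the first place.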

:+1: for having well-specified projects (but not so much so that the uncertainty aspect vanishes).

To better evaluate how candidates will fit with us, we can structure the test project like a mini-sprint: planning and requirements discovery, execution & implementation, and delivery – while evaluating time logging, communication, etc.

How about picking from this list? I think every single one of them would fit the bill, and Natalia would be very happy with us. After I asked her at a meeting today, she already got the go-ahead for us to use core-contributor time on them. I think there’s enough there to keep our CCs busy and for us to hire 10 people by August 31st, thus solving all the company’s problems. ;)

1 Like

+1 on not picking fake projects – imho the best would be a project for a client, but one for which we have at least 1 sprint of margin, to allow us to redo the work ourselves if it doesn’t work out. That would be the closest to real conditions. If we can’t find that, then contributions to Open edX would definitely be much better for judging candidates, and also more useful – especially now that we have core committers on the team who can review in a timely manner.

Btw for the contract signature, @gabriel is covering for my part of the recruitment process – so remember to include him if you test this before I come back (which would definitely be good!).

Two things about the contract part:

  • The candidate should sign the contract before starting work on the project.
  • A clear timebox should be provided for the task/project before, or at the same time as, assigning the work – and it should be stated explicitly that this is the maximum number of hours we will compensate for the task, that they should stop working once the timebox is reached, and push what they have at that point. (Feel free to extend it if it looks worthwhile, but also be clear on what the new timebox is.)

Also, another related idea we haven’t moved forward with: it could be worth sending a list of bounties to candidates who look promising but don’t have any experience contributing to open source. That could increase our pool of potential candidates.

3 Likes

The best in terms of evaluating the developer, yes. But it is a high risk, and thus a heavy responsibility, for an epic owner. I had to do this with SE-3741 and @jvdm and it turned out great because JV is awesome, but if he hadn’t been I’d’ve been up the creek.

This is to say: it must be up to epic owners to take on this responsibility or not. If nobody wants the risk, we can, and should, fall back to community contributions.

1 Like

@adolfo If there is work whose deadline is more than one sprint ahead, there should be zero risk for the epic owner? If it doesn’t work out, the PR can be discarded, the time counted as purely recruitment time, and the task rescheduled for the following sprint?

Ok, I see what you’re saying. But as an epic owner that will likely have to do this many times, allow me to be explicit about how I’d be comfortable doing it:

  • The trial member is not counted in any capacity calculations. (I didn’t think they would be, but we should probably be explicit.)
  • The task in question is due at least one sprint ahead of the one during which the trial member will be executing it. (Again, to be explicit.)
  • The task must have been scoped and estimated to fit into one sprint, and the trial member must have one full sprint to do it. (To avoid giving tasks in the middle of the sprint.)
  • If the task spills over without a good reason, the trial member fails the trial and the reviewer automatically becomes the assignee for the following sprint. By the second Thursday of the first sprint, it should be clear what will happen, and the reviewer should plan ahead accordingly. (This is to avoid making the epic owner scramble to find a replacement at the last minute.)
  • In case of failure, the time spent by the trial member is not billed to the client. (As you suggest above.)

3 Likes