Once again I’ve been working on the academic job security simulation. Yesterday I finished altering the research funding model so that our poor agents no longer live in a world of government largesse where population increases are always matched by enough extra funding to keep grant acceptance rates at 30%.
After a lot of tweaking last night and earlier today, I found that a funding level starting at the equivalent of a 30% acceptance rate and growing by 2% per timestep led to research output levels very close to the previous version of the model. The proportion of grants funded slowly drops over the course of 100 timesteps, falling from that starting 30% to about 17% by the end of an average run.
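To make that mechanism concrete, here’s a minimal sketch of the idea. Only the 30% starting acceptance rate and the 2% funding growth come from the model as described above; the population growth rate, the one-application-per-academic-per-timestep assumption, and all the names are placeholder choices of mine:

```python
def acceptance_rates(initial_population=100, population_growth=0.026,
                     initial_acceptance=0.30, funding_growth=0.02,
                     timesteps=100):
    """Grant acceptance rate at each timestep when funding grows more slowly than demand."""
    population = initial_population
    funded_slots = initial_acceptance * initial_population  # grants the pool can pay for
    rates = []
    for _ in range(timesteps):
        applications = population            # placeholder: everyone applies every timestep
        rates.append(min(funded_slots / applications, 1.0))
        funded_slots *= 1 + funding_growth   # funding pool grows by 2% per timestep
        population *= 1 + population_growth  # demand grows faster, so the rate drifts down
    return rates

rates = acceptance_rates()
print(f"start: {rates[0]:.0%}, end: {rates[-1]:.0%}")
```

With those placeholder numbers the rate happens to drift from 30% down to roughly 17% over 100 timesteps, which is only meant to illustrate the shape of the decline, not to reproduce the model’s actual dynamics.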
I also added a simple retirement mechanism to this version: after 40 semesters, agents start to think about retirement and have a fixed chance (20% at the moment) of leaving the sector forever. The result is a significant rise in the return-on-investment measure as the senior academics start to leave the sector; it seems we had a lot of senior academics coasting along without producing much in the way of research! Compared to the previous version, the older academics also produce significantly less research, which I presume is because the rich-get-richer dynamics of the increasingly competitive funding environment lead a larger proportion of failed applicants to bow out of the rat-race altogether.
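The rule itself is tiny; a rough sketch is below, assuming (my assumption, not necessarily the model’s) that the 20% roll happens once per semester after an agent passes the 40-semester mark, and with a stand-in Agent class rather than the real one:

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:                 # stand-in for the model's academic agents
    semesters_active: int

RETIREMENT_THRESHOLD = 40    # semesters served before retirement becomes possible
RETIREMENT_CHANCE = 0.20     # per-semester probability of leaving once eligible

def apply_retirement(agents, rng=random):
    """Return the agents still in the sector after this semester's retirement rolls."""
    survivors = []
    for agent in agents:
        if agent.semesters_active >= RETIREMENT_THRESHOLD and rng.random() < RETIREMENT_CHANCE:
            continue         # this agent retires and leaves the sector for good
        survivors.append(agent)
    return survivors
```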
Having taken a brief look at all this, I decided to test the feedback given to me at Alife XV. In the initial simulation, promotions had a huge positive impact on research output regardless of whether they were made entirely at random or based on research quality. Several people at the conference suggested that this might no longer be the case if I implemented a more constrained funding system.
So, I ran the simulation 800 times across a range of parameter values, with the limited-funding and retirement mechanisms both turned on. I then used my old pal GEM-SA (the Gaussian Emulation Machine for Sensitivity Analysis) to crunch the numbers and build a statistical model of the agent-based model, which I then ran 41,000 times. The final output of interest is the total research produced across the agent population at the end of the simulation. The analysis looks like this:

Turns out my colleagues were onto something, which I expected (and hoped for, because otherwise it might have meant the simulation had some problems). In this version of the sim, altering the chances of promotion for postdocs does little on its own, accounting for only 0.11% of the output variance. This factor does interact with the level of stress induced by impending redundancy, however, and that interaction accounts for 11.03% of the output variance.
The largest effect here is driven by Mentoring levels, i.e. the amount of research boost given to newly-promoted postdocs; the second-largest is the stress caused by looming redundancies. This is a significantly different result from the previous version of the simulation, so I’ll also run a parameter sweep of promotion levels later to get the complete picture.
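As a side note on mechanics, these sweeps don’t need much scaffolding: on the data side GEM-SA just needs the sampled input points and the matching model outputs, so the driver amounts to a small script along the lines below. The parameter names, ranges, and the stand-in run_simulation() are placeholders rather than my real harness:

```python
import csv
import random

PARAM_RANGES = {                      # placeholder parameter names and ranges
    "promotion_chance": (0.0, 1.0),
    "redundancy_stress": (0.0, 1.0),
    "mentoring_boost": (0.0, 1.0),
}

def run_simulation(params, timesteps=100):
    """Stand-in for a full agent-based model run; returns a dummy output."""
    return random.random()

def run_sweep(n_runs=800, outfile="sweep_results.csv", seed=1):
    """Sample parameters, run the model once per sample, and record inputs + output."""
    rng = random.Random(seed)
    fieldnames = list(PARAM_RANGES) + ["total_research"]
    with open(outfile, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for _ in range(n_runs):
            params = {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}
            writer.writerow({**params, "total_research": run_simulation(params)})

run_sweep()
```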
For the sake of completeness, here’s the graph of the main effects produced by GEM-SA:

Tomorrow I’m hoping to do a similar analysis, but this time leaving Mentoring at a lower, constant level and varying a slightly different set of parameters. My poor laptop needs a break for a little while; it’s pumping out crazy amounts of heat after all this number-crunching.
My other, larger task is to come up with a way to measure the overall human cost of this funding/career structure. I think I can make a good case at this point that job insecurity is not great for research output in the simulation, given that across many thousands of runs I’ve yet to find a single one in which insecure employment produces more research for the money than permanent academic jobs. I’d like to be able to compare scenarios in terms of human cost as well, so perhaps using total redundancies after 100 semesters as the final output for some analyses might give me some ideas.
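As a first stab at the bookkeeping for that comparison, it could be as simple as carrying two summary numbers per scenario: research per unit of funding, and the redundancy count at semester 100. The record and function names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ScenarioSummary:         # hypothetical per-scenario summary (averaged over runs)
    total_research: float      # research produced across the population by semester 100
    total_funding: float       # funding spent over the run
    total_redundancies: float  # redundancies accumulated by semester 100

def research_per_funding(s: ScenarioSummary) -> float:
    """Value-for-money measure: research produced per unit of funding."""
    return s.total_research / s.total_funding

def human_cost(s: ScenarioSummary) -> float:
    """Human-cost measure: total redundancies by the end of the run."""
    return s.total_redundancies
```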
That aside, I think I’ve made a decent start on an extension of the conference paper. Thanks to all those who came to the talk in Mexico and gave me some useful feedback!