Category Archives: Academia

Joined the University of Glasgow

I haven’t posted here in a while, but it’s not for lack of activity — as of 2 October 2017, I’m now a member of the MRC/CSO Social and Public Health Sciences Unit at the University of Glasgow.  The Unit, as it’s known around here, will soon celebrate its 20th year of core funding from the Medical Research Council, and produces research covering a broad range of public health themes.

I’m a part of the Complexity in Health Improvement programme, and will be helping the Unit develop a variety of research projects applying agent-based modelling techniques to complex problems in public health, including obesity, alcohol use, social care provision, and more besides.  I’ll be working closely with the Unit Director, Professor Laurence Moore, and other members of the Complexity programme to develop these projects.

In typical style, the move to Glasgow was hectic to say the least, and despite starting our apartment search many weeks in advance my partner and I only managed to secure a flat five days (!) before my start date.  We were lucky enough, however, to find a very nice flat in Mount Florida, to the south of Glasgow city centre.  The city itself is great so far, with plenty of good places to eat and drink and lots of friendly people around, though the weather is pretty bad (and for the UK, that’s really saying something).

All in all, I’m really excited to be a member of the SPHSU now, and even after just a week there are plenty of interesting projects taking shape.  Watch this space from here on out, I’ll be making an effort to post more now that I’ve finally made the move!


Postdoc simulation analysis (spoiler alert: job insecurity is bad)

Once again I’ve been working on the academic job security simulation.  Yesterday I finished altering the research funding model so that our poor agents no longer live in a world of government largesse where population increases are always matched by an increase in funding to keep grant acceptance rates at 30%.

After tweaking things a lot last night and earlier today, I found that a funding level set at an initial 30% with a 2% increase per timestep led to research output levels very close to the previous version of the model.  The proportion of grants funded slowly drops over the course of 100 timesteps, heading from that starting 30% down to about 17% at the end of an average run.
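
Purely to illustrate how that funding rule behaves, here’s a minimal sketch in Python; the population growth rate below is a made-up number chosen only so the acceptance rate falls from roughly 30% to roughly 17% over 100 timesteps, and none of the names match the actual model code.

```python
# Illustrative sketch only: a funding pool covering 30% of applicants at the
# start and growing 2% per timestep, against a population that grows faster.
def acceptance_rates(timesteps=100, pop_growth=0.026, funding_growth=0.02,
                     initial_pop=100, initial_rate=0.30):
    population = initial_pop
    funded_slots = initial_pop * initial_rate   # grants available at t = 0
    rates = []
    for _ in range(timesteps):
        rates.append(funded_slots / population)
        population *= 1 + pop_growth        # applicant pool grows each semester
        funded_slots *= 1 + funding_growth  # budget grows more slowly
    return rates

rates = acceptance_rates()
print(f"start: {rates[0]:.1%}, end: {rates[-1]:.1%}")  # roughly 30% down to ~17%
```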

I also added a simple retirement mechanism to this version: after 40 semesters, agents start to think about retirement and have a fixed chance (20% at the moment) of leaving the sector forever.  The result of this is a significant rise in the return-on-investment measure as the senior academics start to leave the sector; it seems we had a lot of senior academics coasting along without producing much in the way of research!  Compared to the previous version, the older academics produce significantly less research — I’m presuming this is because the rich-get-richer aspect of the increasingly competitive funding environment leads to a larger proportion of failed applicants deciding to bow out of the rat-race altogether.
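
The retirement rule itself is about as simple as it sounds; here’s a minimal sketch, with placeholder attribute names rather than the ones in my model:

```python
import random

RETIREMENT_AGE = 40       # semesters in post before retirement becomes possible
RETIREMENT_CHANCE = 0.20  # per-timestep probability of leaving once eligible

def apply_retirement(agents):
    """Return the agents who stay; eligible agents leave the sector for good
    with a fixed chance each semester."""
    survivors = []
    for agent in agents:
        eligible = agent["semesters_in_post"] >= RETIREMENT_AGE
        if eligible and random.random() < RETIREMENT_CHANCE:
            continue  # this agent retires and is removed from the population
        survivors.append(agent)
    return survivors
```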

Having taken a brief look at all this I decided to test the feedback given to me at Alife XV.  In the initial simulation, promotions had a huge positive impact on research output regardless of whether they were made entirely at random or based on research quality.  Several people at the conference suggested that this may no longer be the case if I implemented a more constrained funding system.

So, I ran the simulation 800 times across a range of parameter values with the limited-funding and retirement mechanisms both turned on.  I then used my old pal GEM-SA (Gaussian Emulation Machine for Sensitivity Analysis) to crunch the numbers and build a statistical emulator of the agent-based model, and then ran that emulator 41,000 times.  The final output of interest is the total research produced across the agent population at the end of the simulation.  The analysis looks like this:
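
GEM-SA itself is standalone software, so the snippet below is not its API; it’s just a rough Python stand-in for the same idea (fit a Gaussian process emulator to the 800 simulation runs, then query it cheaply many thousands of times), using scikit-learn and random placeholder data in place of the real run outputs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X: one row of parameter settings per simulation run; y: total research output.
# These are random placeholders standing in for the real 800-run dataset.
rng = np.random.default_rng(0)
X = rng.uniform(size=(800, 4))  # four hypothetical input parameters
y = X @ np.array([1.0, 0.5, 0.1, 0.0]) + rng.normal(scale=0.05, size=800)

emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                    normalize_y=True)
emulator.fit(X, y)

# The fitted emulator is cheap to evaluate, so tens of thousands of
# predictions (41,000 in my case) take seconds rather than days.
X_new = rng.uniform(size=(41_000, 4))
predicted_output = emulator.predict(X_new)
print(predicted_output.mean(), predicted_output.std())
```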

[Figure: GEM-SA sensitivity analysis output, showing variance contributions of individual parameters and their interactions]

Turns out my colleagues were onto something, which I expected (and hoped for, because otherwise the simulation might have some problems).  In this version of the sim, altering the chances of promotion for postdocs does little on its own, accounting for only 0.11% of the output variance.  This factor interacts with the level of stress induced by impending redundancy, however, and that interaction accounts for 11.03% of the output variance.

The largest effect here is driven by Mentoring levels — the amount of research boost given to newly-promoted postdocs.  Second-largest is the stress caused by looming redundancies.  This is a significantly different result from the previous version of the simulation — I’ll run a parameter sweep of promotion levels later as well, to get the complete picture.

For the sake of completeness, here’s the graph of the main effects produced by GEM-SA:

[Figure: GEM-SA main effects plot for the input parameters]

Tomorrow I’m hoping to do a similar analysis, but this time leaving Mentoring at a lower, constant level and varying a slightly different set of parameters.  My poor laptop needs a break for a little while; it’s pumping out crazy amounts of heat after all this number-crunching.

My other, larger task is to come up with a way to measure the overall human cost of this funding/career structure.  I think I can make a good case at this point that job insecurity is not great for research output in the simulation, given that across many thousands of runs I’ve yet to find a single one in which insecure employment produces more research for the money than permanent academic jobs.  I’d like to be able to compare scenarios in terms of human cost as well, so perhaps taking a look at total redundancies after 100 semesters as the final output for some analyses might give me some ideas.

That aside I think I’ve made a decent start on an extension of the conference paper.  Thanks to all those who came to the talk in Mexico and gave me some useful feedback!

 


Returning to the postdoc simulation

I’ve been doing some tests on my postdoc simulation today. As suggested by colleagues at Alife XV I’ve implemented a funding system in which the total available funding increases at a lower rate than the population, leading to increased competition.

Total research output under these conditions does increase pretty significantly — however, return-on-investment remains negative, meaning we would still get more for our money by hiring half as many permanent researchers instead of postdocs. Postdocs are still the group producing the lion’s share of the actual scientific work, while permanent academics devote nearly all of their research time to grant-writing.
 
The return-on-investment is less negative than under the previous funding condition, but bear in mind the simulation currently doesn’t account for redundancy payments or training costs for new postdocs.  In these runs the results showed a return of -2.5 papers per unit of funding invested as compared to a postdoc-free scenario; in the unlimited-funding condition with the same settings, the figure averaged -3.6 per unit.  Redundancies were also higher in this condition, about 150 more each run than in the unlimited-funding condition.  This could change significantly, however, depending on the final formula I use for year-on-year research budget increases, given that postdocs’ fates are directly tied to how much research money is available.
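
For clarity, the return-on-investment figure I keep quoting is essentially papers-per-unit-of-funding measured against a postdoc-free baseline run with the same settings; a simplified sketch of that calculation, with variable names of my own choosing:

```python
def return_on_investment(total_papers, total_funding,
                         baseline_papers, baseline_funding):
    """Papers per unit of funding, relative to a postdoc-free baseline run.

    A negative value (e.g. the -2.5 mentioned above) means the baseline
    scenario produced more research per unit of funding than this one.
    """
    roi = total_papers / total_funding
    baseline_roi = baseline_papers / baseline_funding
    return roi - baseline_roi
```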
 
The big question is whether under this condition we still see no improvement in research output or return-on-investment when candidates for promotion to permanency are selected by quality rather than randomly. At this early stage there’s little difference — I’ve only done a few runs, but non-random promotions have not demonstrated a significant difference from random ones in either total output or ROI.  We’ll see if that changes when I do a larger sequence of runs.

My next test after that will be to try this version of the simulation with much longer runs to see if things stabilise at all, or whether the uncertainty introduced by the high-turnover postdoc population continues to drown out any attempts at rewarding high achievers with more grants.  We’re already looking at 50-year runs here, though, so if after two centuries of terrible job security we see hugely better cost efficiencies, I’m still not sure that’s a massive win for the postdoc side of things.  But I suppose that rests on whether you care about the human costs or not.

There’s a lot of tweaking to be done so these are very early days, but it’s an interesting first result.

Paper Submitted To Alife XV

I’m happy to report that I’ve recently submitted a first paper on the postdoc simulation I’ve been plugging away at on these pages for some time.  I’ve been working in collaboration with Nic Geard of the University of Melbourne and Ian Wood, my officemate at Teesside.

The submitted paper is titled Job Insecurity in Academic Research Employment: An Agent-Based Model.  Here’s the abstract:

This paper presents an agent-based model of fixed-term academic employment in a competitive research funding environment.  The goal of the model is to investigate the effects of job insecurity on research productivity.  Agents may be either established academics who may apply for grants, or postdoctoral researchers who are unable to apply for grants and experience hardship when reaching the end of their fixed-term contracts.  Results show that in general adding fixed-term postdocs to the system produces less total research output than adding half as many permanent academics.  An in-depth sensitivity analysis is performed across postdoc scenarios, and indicates that promoting more postdocs into permanent positions produces significant increases in research output.

The paper outlines our methodology for the model and analyses a number of different sets of scenarios.  Alongside the comparison to permanent academic hires mentioned above, we also look closely at unique aspects of the postdoc life cycle, such as the difficult transition into permanent employment and the stress induced by an impending redundancy.  For the sensitivity analysis we used a Gaussian process emulator, which allows us to gain some insight into the effects of some key model parameters.

The paper will be under review for the Alife XV conference very shortly, so I don’t want to pre-empt the conference by posting the full text here.  If — fingers crossed — it gets accepted, I’ll post a PDF as soon as it’s appropriate.  If you want a preview or are interested in collaborating on future versions of the model, please get in touch!


Science about Science: Does Promoting More Postdocs Help?

Just a brief one today — I’ve been playing with parameter settings on the funding/careers model, particularly the impact of postdoc promotions.  In the base scenario, postdocs (referred to as PDRs here: Post-Doctoral Researchers) have about a 15% chance of getting promoted to a permanent position.  Here’s a sample run at the base settings (which includes the mentoring bonus added last time):

[Figure: mean research output by group, base settings (15% promotion chance)]

I’ve finally worked out how to fix the legends on these graphs!  Now let’s compare to a scenario in which 50% of postdocs get promoted:

[Figure: mean research output by group, 50% promotion chance]

 

Note that the mean productivity of grant-holders (the green line) is overall a bit higher than in the 15% case.  The productivity of promoted postdocs (the orange line) also tracks higher over time than in the 15% scenario.

Now let’s try 100% promotion chance:

[Figure: mean research output by group, 100% promotion chance]

Here the productivity of grant-holders and promoted PDRs is higher than in either the 50% case or the 15% case.

So does this mean that promoting more postdocs is our ticket to a more productive research community?  Well, in this virtual academia it seems to help — but we’re still seeing a lower level of productivity than in the postdoc-free scenario.  Not to mention that there’s still quite a bit of statistical work to be done here to determine how significant these effects are — but it’s an interesting result from today’s work and one I hope to address in the paper, assuming that the analysis bears it out.

 

 


Science About Science: More Scenarios

Since my last post I’ve been doing a lot of work on cleaning up the simulation and adding some additional scenarios to the mix.  After some in-depth discussion with colleague Nic Geard, co-author of the 2010 academic funding model that inspired this work, we decided that a good starting point for this would be to compare a more basic growing population of permanent academics with a population that includes insecure postdocs.

So I set about reworking the code to allow for four possible scenarios (a rough sketch of how these might be encoded follows the list):

  1. Core academic funding model as written by Nic and Jason
  2. Simple growing population of permanent academics
  3. Population which includes postdocs, in which research quality does not increase the chances of promotion for postdocs
  4. Population which includes postdocs, in which research quality does increase the chances of promotion for postdocs
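
Internally these scenarios boil down to a few switches; a minimal sketch of how they might be encoded (the names here are illustrative, not the actual variables in the code):

```python
from dataclasses import dataclass

@dataclass
class ScenarioConfig:
    growing_population: bool     # new permanent academics added each semester
    postdocs_enabled: bool       # fixed-term researchers in the population
    merit_based_promotion: bool  # promote the best postdocs vs. promote at random

SCENARIOS = {
    1: ScenarioConfig(False, False, False),  # core funding model (Nic and Jason)
    2: ScenarioConfig(True, False, False),   # growing permanent population
    3: ScenarioConfig(True, True, False),    # postdocs, random promotions
    4: ScenarioConfig(True, True, True),     # postdocs, merit-based promotions
}
```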

This last scenario in particular is intended to investigate how things proceed if we have an optimistic view — we know exactly how good each postdoc is, and we hire only the very best 15% of the current crop during each iteration.  Those in favour of the current structure would most likely argue that competition for limited jobs allows the cream to rise to the top, so we need to investigate whether that assumption holds.

So for the purposes of this post, I’ve done a quick run of the sim for each of these scenarios.  Note that the previous model by Nic and Jason investigates the time-management aspect of grant applications much more deeply — right now I’m just focusing on the mean research output for different groups of academics under each scenario.

Scenario 1: Core Academic Funding Model

If you alter the parameter settings of my version of this model and turn off all my additions — growing populations, promotion mechanisms, and the postdoc system — you end up with a scenario that’s nearly identical to the original model by Nic and Jason.  The only major difference is that in my version the bonus in research quality given to grant-holders is 1.5 rather than 1.25.

So what we see is that grant-holders, as you might expect, have a massive advantage in terms of research productivity:

[Figure: mean research output by group, Scenario 1 (core funding model)]

Grant-holders are sitting pretty at the top there, although their output fluctuates given that various researchers of differing levels of research talent are jumping in and out of the grant-holders club each semester.

(NB: I’m aware that the non-grant holders are invisible in this graph and the next — I’m working on it.  This is all a work-in-progress, it’ll get there in the end!)

Scenario 2: Growing Population

In the second scenario, I’ve added a mechanism which adds a few academics to the population each semester.  Their research quality and initial level of time investment into grant proposals is randomised.  As in the last post we’re living in a generous society here where research funding stays in step with the growing academic population — 30% of applicants are always funded, regardless of the population size.

Perhaps unsurprisingly, the results look nearly the same as in Scenario 1:

[Figure: mean research output by group, Scenario 2 (growing population)]

Just like in Scenario 1, grant-holders do far, far better than the rest of the population, and especially better than the applicants whose grant applications have failed.

Scenario 3: Postdocs, Random Promotions

So now things start getting more bizarre.  In this scenario we introduce the postdoctoral system outlined in my previous posts.  Postdocs are added in proportion to the number of grants that have been funded in a given semester, with a bit of random variation to spice things up.  New postdocs are assigned contract lengths between 4 and 10 semesters.  For the first two semesters their research quality is lower to account for their adjustment period into a new post; similarly, their last two semesters also see a drop in quality due to the time they must devote to finding a new post.

At the end of their contract, postdocs have a 15% chance of being promoted into a permanent position.  That may sound harsh, but that’s actually slightly more generous than reality (the figures I’ve seen have it pegged at 12%).  Research track record doesn’t count in this scenario — this is a world where promotions are entirely a lucky coincidence (some would argue that this is broadly reflective of reality).  Once promoted, they’re now permanent academics and can apply for grants.
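
Pulling those rules together, the postdoc lifecycle in this scenario looks roughly like the sketch below; the attribute names and the size of the adjustment penalty are mine, not values taken from the model:

```python
import random

ADJUSTMENT_PENALTY = 0.5  # illustrative: quality retained while settling in or job-hunting
PROMOTION_CHANCE = 0.15   # chance of a permanent post at contract end

def new_postdoc(base_quality):
    return {"quality": base_quality,
            "contract_length": random.randint(4, 10),  # semesters
            "semesters_served": 0}

def effective_quality(pdr):
    """Research quality dips in the first two and last two semesters of a contract."""
    t, length = pdr["semesters_served"], pdr["contract_length"]
    if t < 2 or t >= length - 2:
        return pdr["quality"] * ADJUSTMENT_PENALTY
    return pdr["quality"]

def contract_outcome():
    """At contract end: a 15% chance of promotion, otherwise the postdoc leaves."""
    return "promoted" if random.random() < PROMOTION_CHANCE else "left academia"
```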

So here’s a sample run of the latest formulation of this scenario:

[Figure: mean research output by group, Scenario 3 (postdocs, random promotions)]

Much like the last set of early results, we see a drastic drop in mean research output amongst permanent academics who are grant-holders, and postdocs don’t do very well in terms of productivity despite allocating 100% of their time to research.  Overall we see no benefit to research output of the population with the introduction of postdocs, and both permanent academics and postdocs see significant variability in their research output.  My interpretation is that the introduction of a randomised population of insecure researchers is massively disruptive — each semester we don’t know how good our postdocs will be, so their output is highly variable, and we also don’t know how good our promoted academics will be, so again we see fluctuations at that level too.

Scenario 4: Postdocs, Non-Random Promotions

This scenario is particularly intriguing to me.  Nic and I had wondered whether selecting only the very best postdocs from the crop for promotion each semester would improve the picture or not.  After all if we pick the best of the best and put them in a position to get grants and thus that juicy grant-holder output bonus, surely things will go much better for our virtual scientists?

Well… not massively:

[Figure: mean research output by group, Scenario 4 (postdocs, non-random promotions)]

Now you’ll see in this run that actually both the grant-holders and postdocs appear to be doing a bit better in terms of research output.  Initially this seems good, but by the end of the simulation we see that the mean research productivity for the overall population is actually slightly lower than in the random promotions case!

At first blush this seems nonsensical, but if we ponder it for a moment I think it makes sense.  While the non-random promotions do mean that we get the best of the postdoc population promoted each semester, it still means we’re highly dependent on the whims of the random-number generator — if we get a few bad crops of postdocs, in other words, we just end up with more crappy academics, and our exacting knowledge of postdoc research quality hasn’t saved us from the disruptive influence of the constant influx of new people with highly variable research output and contract lengths.

Moreover, there’s no mechanism at present for postdocs to be mentored or to mature in their research abilities — once crappy, always crappy, in other words.  In real life people may argue that the trials and tribulations of postdoc life can allow young researchers to grow into more productive academics — so that’s another aspect we need to examine.

I’ve done a bunch more runs with different random seeds and seen variations in the output that seem to support these ideas, but I’m going to spare you the 18 other graphs.  Suffice to say that the graph above seems to indicate a lucky series of postdoc recruitment drives more than anything else.  Instead I’ll keep working at it and post more when I’m more clear on my interpretations of this scenario.

SURPRISE NEW SCENARIO: Non-Random Postdoc Promotions, With Mentoring Bonus!

Wow, what a day for you lucky people!  I’ve just decided to do a quick-and-dirty scenario where we give promoted postdocs in the non-random scenario a bonus to their research quality to attempt to simulate postdocs being mentored toward success by their superiors.  Surely we’ll see a change in the fortunes of our virtual scientists now?

Well… not really:

[Figure: mean research output by group, non-random promotions with 25% mentoring bonus]

In fact things look almost identical, with the exception of the overall mean research output hitting a plateau rather than dropping slightly toward the end of the sim, as we saw above.

To be fair, however, the ‘mentoring bonus’ I gave out here was not outrageously large — effectively the promoted, mentored postdocs get a 25% bonus to research quality, as in the tiny sketch below.  So what do we get if I double that to 50%?
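
(The function name in this sketch is invented for illustration; the model doesn’t literally contain this code.)

```python
MENTORING_BONUS = 0.25  # the follow-up experiment below doubles this to 0.50

def mentored_quality(pdr_quality, bonus=MENTORING_BONUS):
    """Research quality of a newly promoted postdoc after mentoring."""
    return pdr_quality * (1 + bonus)
```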

[Figure: mean research output by group, non-random promotions with 50% mentoring bonus]

Ah-ha!  At last, a very slight positive outcome.  Mean research output overall trends ever so slightly upward over the course of this simulation run, rather than plateauing or starting to fall as above.

But I think we’d have to admit that this is a fairly minimal outcome considering a rather generous scenario — and it’s quite likely this won’t hold in every run, with some runs showing worse results depending on the feelings of the random-number generator.  At a quick glance it does seem reasonably consistent, though — of the 10 runs I’ve just done for this scenario, 7 showed a similar tiny, tiny positive trend.

So what I’ve gathered from today’s work is that increasing the average research output of academics in a postdoc scenario requires some major work: we need to recruit only the very best postdocs; and we need to ensure they get mentoring of high enough quality that they are a full 50% better than they were during their postdoc days.  Even with these powerful tools, that’s still barely enough to overcome the disruptive impact of a fluctuating population of insecure overstressed young researchers.

In real life of course, we don’t have such a transparent method of evaluating research outputs and determining the best postdocs to hire — nor do we have a population of super-mentors who can massively improve the productivity of every single postdoc.  So, if we believe the underlying assumptions of this model, then perhaps we should start to think about whether insecure research posts are a good thing for science or not.

Of course there’s a human dimension here as well — over the many runs I’ve done with the postdoc mechanism running, most simulations top out around 500 active academics at the end of the simulation, with between 500 and 600 total postdocs hired over the 100 semesters.  Out of those we’ll see between 70 and 90 postdocs get promoted, while the rest all get the sack and leave academia forever.  Do we really want to be sending these vast numbers of PhD graduates out of the academy and lose all that potential research talent?  That seems like an incredible waste, and even more so when we see how difficult it is to get a positive impact on productivity out of this structure.

Next time: I’ll keep poking at this simulation and see whether these results hold up, and I’ll be doing some other comparisons on other measures, including total research output across different groups.  Early indicators: postdocs increase total research output, and research quality across the population becomes highly unstable.  More later.

 

 


More Science about Science

After a good few hours working on the simulation yesterday — and by ‘a few’ I mean ’15 hours’ — I have things working in a more stable configuration now.  The original simulation I’m working from was structured around a stable population, but in this simulation I’m using a dynamic population — a very dynamic one, in fact, as postdocs shuffle in and out constantly.

This has meant that I’ve been working a lot on re-writing some of the code to facilitate the addition of postdocs to the virtual research community.  Yesterday I ended up learning some new skills when I found that I needed lists of agents that retained the order of the elements within, so that was an interesting opportunity to learn more about ordered dictionaries in Python.  Presumably I might be able to make use of those in future models too, so that’s very helpful.

So, at the moment we have a nicely dynamic population of simulated academic agents in which postdocs enter the population every semester as grants are disbursed to tenured academics.  Tenured academics spend their time doing research and applying for research grants; they learn from experience and change their time allocation strategies regularly to try to maximise their success in these arenas.  The simulation starts with 100 tenured academics, and after 50 years in a typical run we end up with about 1200 academics in total, with about a third to a half of those being postdocs, depending on the parameter settings.

These results are based on a generous virtual society though, at least compared to reality: 25% of postdocs get promoted to tenured posts at the end of their contracts; research funding is available to about 30% of academics even as the population grows massively over the years; and tenured academics holding grants get a 50% boost to their research output.  Initially I had included a ‘management penalty’ to research quality for grant-holders, to account for the time spent line-managing postdocs and administering projects rather than actually doing research, but in this generous situation I left that penalty out completely.
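
For anyone wanting to reproduce the flavour of this ‘generous’ setup, the settings described above boil down to a handful of numbers; here they are collected into a config dict (the key names are mine, and the semester count assumes two semesters per year):

```python
GENEROUS_SETTINGS = {
    "initial_tenured_academics": 100,
    "semesters": 100,                   # 50 years at two semesters per year
    "postdoc_promotion_chance": 0.25,   # promoted to a tenured post at contract end
    "grant_funding_rate": 0.30,         # fraction of applicants funded each round
    "grant_holder_research_bonus": 0.50,
    "management_penalty": 0.0,          # deliberately switched off in these runs
}
```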

So, in this relatively happy situation compared to the real world, do we see any productivity gain from the mass introduction of non-tenured, research-only staff?

Well… no, not quite:

[Figure: mean research output by group, generous funding environment]

As you can see above, once postdocs are introduced we see a relatively precipitous drop in research productivity.  Grant-holders in particular suffer a great deal on this front, despite having that 50% research output bonus.  Tenured academics not holding grants (in purple) and failed grant applicants (yellow) also dip significantly, but then rebound slightly as they adjust their time allocation strategies between grant-writing and pure research.  Postdocs enter at a lower point and then settle at a middling level of productivity, dragged down by the lowered research productivity they experience at the beginning and end of their contracts.  Their output tends to be more ‘spiky’ in general, as they shuffle in and out of the population very frequently.  Toward the end of the simulation everyone begins to converge into the 0.3–0.5 range or so — and in this run we can see the postdocs just overtaking the grant-holders in productivity.

Another interesting aspect here is that in a no-postdoc situation there’s a reasonable positive correlation between research quality and grant disbursement — better researchers tend to get the money, in other words.  When postdocs are introduced that breaks down completely, and there’s little to no correlation between the two; in fact on more than a few runs I’ve seen slight *negative* correlations, this in spite of the fact that in the simulation research quality is used in the ranking of applications.
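
The correlation I’m describing is nothing more sophisticated than a correlation coefficient taken across the population at the end of a run; a sketch of that check, assuming each agent records its research quality and cumulative grant income (attribute names invented):

```python
import numpy as np

def quality_funding_correlation(agents):
    """Pearson correlation between research quality and total grant income."""
    quality = np.array([a["research_quality"] for a in agents])
    income = np.array([a["total_grant_income"] for a in agents])
    return np.corrcoef(quality, income)[0, 1]
```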

So — at this stage it seems like introducing a highly volatile, insecure population of researchers into the mix creates a large amount of uncertainty, reduces overall research output, and in general disrupts things significantly.  Even in a ‘generous’ research environment we see these problems clearly.

What about in a more challenging funding environment?  Let’s imagine we’re working in biology or something, one of those fields where grant applications only succeed 10-15% of the time, and money is scarce, so permanent positions are even more difficult for postdocs to achieve:

[Figure: mean research output by group, more competitive funding environment]

The population is much smaller, sustaining 605 academics in this particular run and just 96 postdocs — but the research output stats look extremely similar.  Grant-holders suffer a huge drop in overall productivity, punctuated by periods of high output when they’re holding that grant, and dipping again when they dump research time into grant-writing to try to get the next one.  Failed applicants and non-grant-holders still hover around the bottom edges, de-emphasising research as they try desperately to get research money through writing bids.  Postdocs, meanwhile, wobble around the 0.4 mark most of the time, never quite in post long enough to settle in — and given that they’re not able to apply for grants, they can never benefit from that 50% bonus to output like the senior academics can.

Again these are early results and a very cursory analysis, but what’s happening here seems pretty stable even with fairly significant changes to parameter settings (I’ve done many more runs on my own to check this).  This suggests that future versions of the simulation will need to look at more drastic changes to the research career and funding structures in order to escape these problems.

Next time, I’ll be adding some more analytical tools to the simulation, and developing some experiments to test alternative funding disbursement methods and career structures.  As ever please do get in touch with me if you have ideas or suggestions — I’m very keen to have more people to speak to about this kind of work!

 

 


Modelling Research Careers: Very Early Results

I’ve managed to get that research careers simulation up and running today — a very early version, mind you.

Each time grants are disbursed to our simulated academics, the top 10 applicants receive a postdoc.  Postdocs work full-time on research and do not apply for grants.  At the beginning and end of their contracts (which range from 2-5 years long) their output is reduced significantly to account for stress caused by entering a new job or searching desperately for a new one, respectively.  Postdocs have a 10% chance of being made permanent at the end of their contracts; if they’re unlucky then they simply drop out of the system altogether (I haven’t implemented multiple contracts yet).

So what we have is a very volatile situation right from the beginning — we’ve got lots of people on short contracts, most of whom are under significant stress for part, or even all, of said contracts.  Postdocs are constantly being shuffled out of the system and replaced with new postdocs, so the research environment is being filled up with stressed-out people with highly variable levels of research talent — and talented ones are just as likely as crappy ones to be booted out the door at the end of their contracts.  Permanent academic jobs are in short supply, so most postdocs never get a chance to contribute to grant applications.
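
Spelled out as code, that day-one allocation rule is roughly the following: the field names are invented for illustration, and this isn’t the actual model code.

```python
import random

def allocate_postdocs(ranked_applicants, n_postdocs=10):
    """Each funding round, the top-ranked applicants each gain one postdoc
    on a fixed-term contract of 2-5 years (4-10 semesters), with a 10%
    chance of a permanent post when the contract ends."""
    new_postdocs = []
    for academic in ranked_applicants[:n_postdocs]:
        new_postdocs.append({
            "supervisor": academic,
            "contract_semesters": random.randint(4, 10),
            "permanency_chance": 0.10,
        })
    return new_postdocs
```
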
The results are rather more drastic than I anticipated.  Here are the results for mean research output from a quick run of the simulation including postdocs:

[Figure: mean research output by group, postdocs included]

Contrast that with the below, which shows the mean research output from a run with the same initial conditions but without postdocs in the simulation:

[Figure: mean research output by group, no postdocs]

Mean research output across all categories, no postdocs: ~0.61
Mean research output across all categories, postdocs added: ~0.34

Given I wrote all this code in a day, these results are highly speculative at best — but I’m hoping that the final version will give us a decent representation of the impact of competitive funding systems and job instability on academic research quality.  At this point I’m just pleased to see it up and running!

There’s still a ton of work to do: double-checking everything, adding in detailed stats collection on the postdocs, then revamping the funding disbursement functions to tie grants and postdocs together explicitly so I can measure output by project/PI.  There are a few other bits I really want to do, like implementing multiple contracts, etc.

I’ll keep posting progress reports as I go… please wish me luck!

Modelling Research Careers: early thoughts

As some of you know already, because I keep going on about it, or worse, trying to drag you into it, I’m hoping to kick off a major project on simulating the research career structure and its effect on scientific productivity.  Having done my time as a postdoc, like many of us, I’m pretty convinced that the current pyramid-scheme structure of academia is not only sub-optimal, but fundamentally damaging, particularly toward academics from marginalised groups.

My first attempt at building an early-stage model of research careers takes inspiration from Geard and Noble’s paper on Modelling Academic Research Funding as a Resource Allocation Problem.  In this paper the authors construct an agent-based model in which simulated academics attempt to obtain grant funding — frequently a prerequisite for any kind of decent job security these days — by devoting a certain portion of their time to writing proposals for competitive funding bids.  Agents have an underlying research productivity level which influences the perceived quality of their proposals when they come under review.  At the review stage the top-ranked proposals are funded, which manifests as an increase in the successful agents’ research productivity.  Agents produce research outputs according to their productivity, whether or not they are holding a grant, and how much research time they have available (given that some period of time must be spent writing grant proposals).

In the end the paper demonstrates that the current system of grant funding is inefficient — huge amounts of time are spent on obtaining grants, which takes away from research productivity, and since most grant proposals are unsuccessful we end up with a lot of time wasted.

What I’m proposing at this stage is to modify this framework to include agents who are on fixed-term research contracts.  Now, presenting a simplified version of the post-doc experience would require a few changes:

  • Agents should be on fixed-term contracts — in the UK about 2/3 of all research-active academics are on FTCs, so the model should reflect this
  • Many postdocs are given much more time to devote to research in general, being largely free of time-consuming teaching or administrative duties
  • Postdocs need to spend significant time during the end of their contracts looking for a new job
  • New postdocs may lose some productive time due to needing to acclimatize to their new working environment
  • Postdocs are often tied to specific projects, and their contracts live and die as the project does

At the moment I’m envisioning a version of the model where we add significant new elements to work postdocs into this framework (a rough sketch of these rules in code follows the list below):

  • FTCs can vary in length from 2 to 10 semesters — as do projects
  • FTC agents don’t contribute to grant proposals, nor do they submit proposals for review (in reality some do contribute, but at least here in the UK postdocs are not considered proper academics by the Research Councils and thus cannot apply)
  • When the postdoc first starts work, 30% of their time is spent adjusting to the new environment, getting to know people and the work that needs doing
  • When the postdoc’s contract is due to end, again they lose 30% of their time due to job-hunting, interviews, and general stressing out
  • When grants are disbursed, the top 10 funded projects are allocated a postdoc with a contract length matching the grant length
  • Postdocs add their research productivity to the academic holding the grant
  • When a postdoc’s contract ends, at the end of the current semester they’re given a 10% chance of being made permanent — allowing them to then conduct their own research programmes, apply for grants and get their own postdocs
  • Postdocs who don’t get made permanent can transfer to another project if one gets funded and needs a postdoc.  If that doesn’t work they drop out of the research population
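
Here’s that rough sketch: purely a paraphrase of the bullet points above with invented names, not working model code.

```python
import random

ADJUSTMENT_FRACTION = 0.30  # time lost settling in, and again while job-hunting
PERMANENCY_CHANCE = 0.10

def research_time(semesters_served, contract_length):
    """Fraction of a semester a postdoc can actually spend on research."""
    settling_in = semesters_served == 0
    winding_down = semesters_served >= contract_length - 1
    if settling_in or winding_down:
        return 1.0 - ADJUSTMENT_FRACTION
    return 1.0

def end_of_contract(open_projects):
    """Permanency roll, otherwise try to transfer, otherwise leave the population."""
    if random.random() < PERMANENCY_CHANCE:
        return "made permanent"   # can now apply for grants and take on postdocs
    if open_projects:
        return "transferred to another project"
    return "left the research population"
```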

At the end of a run — say, 100 semesters like in the original model — I’d be looking at overall research productivity, research productivity in postdocs vs permanent faculty, the career history of the postdocs, and the distribution of grant income across the population.

What I’d expect to see is an elite set of agents who started collecting post-docs early, then snowballed their way into a series of successful grants and even more postdocs, while the rest of the population flounders, and is at a serious disadvantage compared to faculty members on the exploiting-the-postdocs train.  As for the postdocs, only a tiny number would be made permanent and thus benefit from their efforts, while a large number would end up on multiple FTCs or dropping out of the population altogether.  All of this would be broadly reflective of reality.  If that were to happen then perhaps this model could provide a good platform for investigating alternative methods of organising research careers, and for examining how different funding disbursement methods affect the fate of postdocs.

What I’m hoping to get out of this in the main is a model which demonstrates the interplay between precarious employment in academia and our current competitive methods of disbursing funding.  Modelling research as a resource allocation problem fits this well, I think, because postdocs are placed under particular pressure to find their next posts in a limited time while being expected to produce substantial research output.

Now as I write this I’m very aware there are a lot of things at play here and this model is already in danger of being over-complicated.  Even so, there are a number of other factors I’d like to address at some later stage, most particularly the impact of stress on research output (from failing to get a grant, worrying about job security, etc.), but let’s just see if this works at all first!

But first: please do chime in if you can and let me know what you think, where I’ve gone wrong, etc.


Rethinking UK Research Funding: Presentation Slides

A bit more content from Wednesday’s Rethinking UK Research Funding meeting.  The organisers have just posted the speakers’ slides, so do check them out if you have a moment and are interested in the topic.

For my part I’ve started working on the framework for a simulation model of the impact of short-term contracts and researcher stress on productivity.  In the first instance we’ll be constructing a very simple model just so we have a system to play around with — later on we will gather data from surveys of real-world post-docs to give our simulated researchers more realistic strategies.  Then we’ll see how our agents go about coping with the stresses of trying to bid for funding while also trying to get themselves a job and some semblance of security.

Watch this space 🙂
