Tuesday, December 20, 2011


My strongest memory of New Zealand’s South Island, unfortunately, will be driving two-fisted and white-knuckled down narrow streets, twisting over insanely steep mountains, and crossing one-lane bridges. Did I mention New Zealand is a country where high-octane fuel is available at every pump? I finally realized that the “keep left” signs weren’t meant for foreign tourists, but rather for Kiwis gleefully taking shortcuts around right-hand bends at 140 kph.
Q: Why did the Weka (a native bird that looks a bit like a long-legged chicken) cross the road?
A: Unlike the chicken, the Weka didn’t have a good reason to cross the road, but, like all good Kiwis, he doesn’t feel comfortable unless he’s risking life and limb on the highway.
Back to the Great Plains now, for the Xmas season, turning in final grades, and preparing for next semester’s classes. Have a great holiday season everyone!

Monday, December 19, 2011

Schooling again


It seems pretty clear that the term “Adaptive Management” has gone the way of “Sustainable Development” – it’s popular, so everyone wants to do it, and as a result there is a proliferation of interpretations of AM. Rather than waste time arguing over whose interpretation of AM is the right and true path, Jamie McFadden and I tried to identify the attributes of different interpretations, and then classify exemplars from the literature into a few, or in fact two, schools of thought. At the time, we had a third category, “other”, where we stuck everything that didn’t obviously fit the first two. By nature I’m a lumper, not a splitter, so it causes me great pain, but I’ve concluded that there needs to be a third school of thought on AM.
Credit for identifying this new school goes to Mike Runge, and the “Redefining Adaptive Management” symposium that we partially sat in on at the ICCB meeting. The key attributes are 1) having measurable objectives, 2) carrying out a management action intended to move the system closer to the objectives, 3) effectiveness monitoring to determine if the system has in fact moved closer to the objectives, and if not, 4) trying something else. For the moment, I’m going to dub this the Foundations of Success school, after the consortium of international conservation organizations that put together the FoS umbrella, and the ICCB symposium. In the Artificial Intelligence literature they call this “trial-and-error learning”. I’ll write some more about this later.

Thursday, December 8, 2011

End of the story

I’m outta here; they are talking about effectiveness monitoring again. Although it is an interesting dataset – snares found per km walked in a forest park in Rwanda. Looks like they need to control for observation effort – otherwise the ranger posts appear to be attracting snares.

Big Partnerships


Kim Lutz … or no, some other person talking about “Environmental Flow Prescriptions: an adaptive partnership”
This is a pilot project partnering with USACE on retiming flows from dams – gee, where have I heard that before. They want to get both high peaks and subsequent low discharge periods as well. At least one of the projects has peak flows < 1 kcfs, so … although it looks like they include the Missouri River.
On the Savannah River, the questions are where to release, how much, and when, and what change to expect – phrased as a research project. They were surprised by the fish response to a managed flow – warm water caused fish to move downstream – an example of learning! Trial and error, not AM.
On the Connecticut River they have management models of the system, a big one for the whole system, and a mini Stella model to interact with managers “hands on”. That’s cool. But no indication that the models make ecological predictions, or that they use them to predict the effects prior to choosing a strategy.
OK, so they've used this on some small rivers so far, not yet on the Missouri. 
Take-home lessons: translate between modelers and ecologists – OK, I’m not always easy to understand, I get it. Iterate stakeholder analysis – also a good idea.
Hmm, still not seeing any AM. 

Redefining Adaptive Management


Welcome to the first LIVE blog direct from the Epsom 3 room at the Skycity convention center in Auckland. This is a special session redefining Adaptive Management organized by Craig Groves and Jensen Montambault from The Nature Conservancy. Definitions are good, so I’m looking forward to adding a new school of thought to my pantheon. I’m sitting here with Mike Runge (USGS), waiting eagerly to hear how our lives will be different. Well, maybe that’s just me.
Mike has just offered the perspective that non-decision-theoretic AM people worry only about the unknown unknowns – the surprises that are unanticipated.
Here comes Craig Groves.
Survey of AM from conservation measures partnership
Of 7000 projects, only 5% complete the full cycle, although 2500 have plans.
Why? AM is too complicated, and there is no mandate from senior management.
Overcoming Barriers
Use risk and leverage to guide investments in AM. Invest in projects that are high risk, with potential to generalize to other projects. You need good enough statistics to be able to say that things are actually happening the way they expect. They have a nice little decision tree that leads to diagnosing when an experimental approach (AM?) would be needed.
Focus AM on addressing questions that managers need to answer – this seems obvious, but it isn’t clear that he means which decision to make. Mike says “Looks like evaluation monitoring”, and I agree. www.conservationgateway.org is the place to go for the details, apparently.
Stop reinventing the wheel – yes! There’s 60 years of literature on decision analysis! This is a good idea – collaborating with other agencies and analyzing data across projects within TNC – but it’s not AM.
Get senior managers to support the idea – yep, hard to disagree with that. Another signal that it is evaluation monitoring in disguise is that they “peer review” their plans for evaluating effects.
Summary
Not all projects need scientifically rigorous AM – some do, some do not.
Training and tools matter, but so does leadership.
And we need more success stories.
It’ll be interesting to see how they define success. Getting around the Plan, Do, Check, Adapt cycle? He didn’t define AM :(

Tuesday, December 6, 2011

Congressing

I'm fortunate enough to be down in Auckland, NZ this week for the International Congress on Conservation Biology. This is the first of the Society for Conservation Biology's new biennial-format meetings. So far I've not been blown away by anything, but it's fun to catch up with people. I've run into a lot of people from my time in Oz, which is harder to do at conferences in North America. There was a good session yesterday on modelling future responses to climate change, which, for once, included a talk that expressed some skepticism of the utility of static species distribution models for this purpose (John Leathwick, NIWA). Walter Jetz (Yale) gave a remote talk describing his lab's work on building models of every bird species on the planet - bold stuff (see www.mappinglife.org). There was some talk of testing predictions from these models, but no discussion of how much accuracy is enough, or of what these models would be used for. Luckily, I also saw Helen Regan's (UC Santa Barbara) talk, where she laid a stochastic population model on top of climate-affected future habitat distributions. That was sufficient antidote to the residual frustration from earlier in the day. Although I worry about using downscaled point predictions of climate in this way - the uncertainty in those predictions is huge.

Friday, November 4, 2011

Not predicting the future

I came across the following quote in an old USFWS report today:
In essence, then, mathematical models applied to real-world situations can be used only as a tool to guide management decisions having future effects on an ecosystem. In contrast, models cannot be used to tell a manager what the future will look like.
Say whut?!? How can you do the first without doing the second? Can someone explain this to me, please?

Tuesday, October 25, 2011

Ignoring the evidence?

A big part of what I call the "Decision Theoretic School" of AM focuses on using models to predict future outcomes. However, before you can predict the future, you have to fit the models to existing data, and that's what Skalski et al. (2011) did in a very nice article demonstrating the use of population reconstruction methods for age-at-harvest data on American Marten in Michigan. This approach is gaining a lot of ground in terrestrial wildlife management, although it's old hat in oceanic fisheries work. There's an abundance of age-at-harvest data in state agency archives just waiting to be put to work. However, these methods require fairly substantial mathematical/statistical/computational know-how, which is why most agencies still rely on population indices of various sorts. Skalski et al. are critical of this approach:
Use of statistical population reconstruction suggests that the population of martens has been in general decline in Michigan’s UP, a finding not clearly evidenced using more traditional indices of harvest.
They then give examples of three harvest indices, two of which are partially or completely consistent with declining populations, and further conclude that:
Inconsistencies between these traditional harvest indices and the statistical population reconstruction results emphasize the importance of reliable and defensible population estimates, including estimates of precision.
Except that they are not inconsistent! Only the sex ratio index fails to indicate a female bias, and I'm not sure why that would lead to a declining population anyway ... I'd better go back to Skalski et al.'s great book on wildlife demography and read up on that. Juvenile/adult ratios and CPUE seem much more relevant, and they clearly are consistent with a declining population. So it seems that Michigan DNR had data indicating that Marten populations were declining, but failed to do anything about it. Now that they have a "better" analysis, complete with confidence limits, will they act? I suspect not:
Season lengths, harvest quotas, and registered harvests for martens and fishers in Michigan are generally conservative when compared to nearby jurisdictions with harvest seasons.
So harvesters are already more limited in Michigan than elsewhere, and the evidence in favor of a decline is actually not that strong. I've replotted the data from their Table 4 below; they show something like this in their Figure 2, but it appears to contain incorrect data or typos on the Y-axis.
As you can see, the confidence limits on the abundance estimates are huge, and quite consistent with a population that isn't decreasing at all, or is even increasing. The seven models they tested all assume that natural survival and harvest vulnerability are constant across time, so model selection doesn't provide the "population decreasing" evidence. They calculated a population growth rate of lambda = 0.94, but provided no confidence limit for this estimate. So the uncertainty in the abundance has been quantified, and very nicely, but how will managers respond to that uncertainty? The real question for me is why harvest effort increased 5-fold in 7 years. They mention nothing that suggests a big change in management conditions - is an extension of the season from 10 days to 14 days in 2002 all it takes?
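Just to show what's possible, here's a minimal sketch of how the uncertainty in the abundance estimates could be propagated into a confidence limit on lambda with a parametric bootstrap. The abundances and standard errors are invented for illustration - they are NOT the numbers from Table 4 - and I'm assuming lognormal estimation error and a simple log-linear trend.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical abundance estimates and standard errors (NOT the values
# from Skalski et al.'s Table 4 -- invented for illustration only).
years = np.arange(2000, 2007)
N_hat = np.array([1400., 1350., 1300., 1320., 1200., 1150., 1100.])
se = np.array([400., 380., 390., 400., 350., 340., 330.])

# Lognormal estimation error: choose sigma so that the CV on the
# natural scale matches se / N_hat.
sigma = np.sqrt(np.log(1 + (se / N_hat) ** 2))

def fit_lambda(logN):
    """Slope of log abundance vs year gives log(lambda)."""
    slope = np.polyfit(years, logN, 1)[0]
    return np.exp(slope)

# Parametric bootstrap: resample the abundances, refit the trend.
boot = np.array([fit_lambda(rng.normal(np.log(N_hat), sigma))
                 for _ in range(10_000)])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"lambda point estimate: {fit_lambda(np.log(N_hat)):.3f}")
print(f"95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
print(f"P(lambda >= 1) = {(boot >= 1).mean():.2f}")
```

With confidence limits as wide as the ones in Table 4, I'd expect an interval constructed this way to straddle 1 comfortably - which is exactly the point.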

Monday, October 24, 2011

The need to include parameter uncertainty

One of the themes in Population Viability Analysis that's been echoing around for a bit is the distinction between sampling variability and environmental variability in vital rate estimates. For instance, if you measure reproductive output for Piping Plovers over 5 years, the variance in reproductive output includes two components - variation between years due to environmental and biotic differences, and pure sampling error due to the fact that you can only measure reproductive output for a sample of nests. Conor McGowan and coauthors have a nice article in the latest issue of Biological Conservation, "Incorporating parametric uncertainty into population viability analysis models", which directly demonstrates the dramatic impact of failing to distinguish between these two sources, and/or failing to incorporate both of them. Here's the "killer figure":
The top two panels are what you get if you either A) separate temporal and sampling variance, but ignore sampling variance, or B) leave sampling and temporal variance combined as "process variance". The bottom panel shows the impact of separating temporal and sampling variance, and then using them independently in the predictions. The expected trajectory isn't much different. But the variance in the trajectory is much, much bigger in case C. I saw this exact same pattern in regional models of Piping Plover and Interior Least Tern prepared for the USACE on the Missouri River:
This is the distribution of population sizes in 2015, forecast under the "Business as usual" habitat selection strategy, and including sampling variability in the vital rate parameters. The vertical red bar indicates the Recovery Plan target, which is met less than 50% of the time. The trouble with these predictions is that they end up including POSITIVE trajectories as well as negative ones. This tends to make them controversial, because obviously plovers can't increase in the absence of substantial modifications to their habitat - they're threatened. They have to decrease. Don't they?
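To make the mechanism concrete, here's a minimal sketch of the three cases using a scalar exponential growth model. All the numbers are hypothetical - this is not McGowan et al.'s plover model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical values (not McGowan et al.'s): estimated mean log
# growth rate, its sampling SE, and the temporal (environmental) SD.
r_hat, se_r, sd_temporal = 0.00, 0.05, 0.10
n_years, n_reps, N0 = 25, 5_000, 500

def simulate(mu_per_rep, sd_annual):
    """Project log N: one mean per replicate, annual noise around it."""
    r = rng.normal(mu_per_rep[:, None], sd_annual, (n_reps, n_years))
    return np.log(N0) + r.cumsum(axis=1)

# A) sampling variance separated out, then ignored: temporal noise only
A = simulate(np.full(n_reps, r_hat), sd_temporal)
# B) lumped: sampling + temporal treated as one big "process" variance
B = simulate(np.full(n_reps, r_hat), np.hypot(se_r, sd_temporal))
# C) separated and both used: draw the mean once per replicate
#    (parametric uncertainty), then temporal noise around that mean
C = simulate(rng.normal(r_hat, se_r, n_reps), sd_temporal)

for name, logN in [("A", A), ("B", B), ("C", C)]:
    final = logN[:, -1]
    print(f"{name}: median N_25 = {np.exp(np.median(final)):7.1f}, "
          f"90% interval = ({np.exp(np.percentile(final, 5)):.0f}, "
          f"{np.exp(np.percentile(final, 95)):.0f})")
```

The key is that the parameter draw in case C is made once per replicate and then persists across all years, so its contribution to the spread of log abundance grows with the square of the time horizon rather than linearly. That's why the fan of trajectories in panel C is so much wider.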

Tuesday, September 27, 2011

The need for theory

hmmm, that doesn't rhyme quite as well. Ben Bolker brought the following quote from Efron and Tibshirani (1986; "Bootstrap methods for standard errors ...") to my attention:
An important theme of what follows is the substitution of computing power for theoretical analysis. This is not an argument against theory, of course, only against unnecessary theory.
I've often thought of the need for theory as falling along a continuum proportional to 1/n: when your sample size n is small you need strong theory to make predictions, and when it is large you can get away with less theory. In either case it helps if your theory is well tested in other settings, or you risk making predictions that are completely bogus.

Tuesday, September 6, 2011

Wolf Management reprise

On The Wildlife Society blog, Michael Hutchins criticized Deborah Peter's article in the Huffington Post on the current wolf harvest. One section in particular emphasizes why wolf management will be political, not scientific, and thus not a good candidate for AM:
I hate the fact that Congress intervened in the ESA with regard to wolf management. Management and conservation should be in the hands of scientists and professional managers and not in the hands of politicians. But why did this happen? Precisely because extreme animal rights proponents (and some extreme environmentalists)–unwilling to acknowledge that wolves have indeed recovered, pushed things too far, arguing for no control what-so-ever.
The reason it is political is precisely because different groups hold different values for wolves - ranchers vs. cool-headed wildlife scientists vs. extreme animal rights proponents. Last time I looked, people are allowed to have different values, and when they do, politics, not science, will carry the day.

Monday, August 22, 2011

Making Decisions is hard!

Yes! Making decisions is hard, and it saps brain energy, which in turn reduces self-control! Eat chocolate before crossing the Rubicon!

Wednesday, August 17, 2011

Info gap uncertainty

You can't imagine how dreadfully unhappy I was to discover that not all uncertainty could be handled with probability, even subjective probability. My (former) student Max Post van der Burg wrote a paper on one approach to handling this type of uncertainty in structured population models, using the info-gap terminology developed by Yakov Ben-Haim. Yakov describes the approach in a book, which is both a bit expensive and a bit long for the casual reader. Lots of stuff in there! However, recently Yakov joined the blogosphere with tidbits intended to introduce his ideas in smaller doses.

Schooling one's thoughts

Quite a while back Jim Peterson (now at Oregon State), started me thinking about similarities and differences between approaches to Adaptive Management. One of my students, Jamie McFadden, took on this idea and conducted a small review of published AM studies, which is now available as "Evaluating the Efficacy of Adaptive Management Approaches: Is There a Formula For Success?". In it, Jamie outlines the attributes of AM projects that fall into two camps: the Experimental Resilience camp and the Decision-Theoretic camp. Jamie found that projects in the DT camp were steadily increasing in number, and that they tended to reach a higher level of success - as she defined it. I hope this article stimulates some broader conversation about what AM is and isn't, how to measure success, and how we can continue to improve - I believe it is time to "Adaptively Manage" Adaptive Management.

Tuesday, August 16, 2011

Conceptual models

Kate Buenau of Pacific Northwest National Laboratory sent the following link:
Way complicated, but I love the way the different links light up when you mouse over a node. Positive and negative influences are indicated with different line styles and with symbols where each link reaches its target.

I'm reminded of a quote that Stephen Pacala gave in his talk - I can't remember the exact wording - something like: building a complex model of a complex system risks leaving you with two things you don't understand - the model and the real system.

Wednesday, August 10, 2011

Horn tooting

One of the things I've been interested in for quite a while is making decisions with poor or no information - what social scientists since Keynes and Knight have called uncertainty, meaning that there are no probability distributions available for the outcomes. If we're being honest with ourselves, this characterizes a lot of circumstances in endangered species management. In such circumstances, one possible response is to "satisfice" rather than optimize the management actions.

Earlier this year Max Post van der Burg and I published an article in Ecological Applications, "Integrating Info-gap Decision Theory With Robust Population Management: A Case Study Using The Mountain Plover", where we used a combination of methods borrowed from robust control theory and satisficing to understand the value of a particular management action to a threatened species. This was a piece of Max's dissertation, and as usual in such things, he did all the hard and important work!

The core idea of "satisficing" is to find a decision that performs well enough, but over the largest possible number of ways of being wrong. In contrast, optimization focuses on maximizing performance assuming that the system is perfectly understood - i.e., all the parameters are known perfectly and the system model is exactly correct - circumstances that are never true even in the best of times. So while an optimal decision will usually outperform a satisficing decision if one's knowledge of the system is perfect, the satisficing decision will continue to do well even if the system model and its parameters are incorrect.
Of course, it is possible that a satisficing strategy is also the optimal strategy, and then we're happiest, but this doesn't seem to happen very often.
Max's contribution was to couple a matrix population model of Mountain Plover with the idea of satisficing to look at how well "nest marking" of Plovers performs as a conservation strategy. The upshot is that even if we are not sure about the life history of this species, nest marking increases the range of "wrongness" under which we will see positive population growth. What we didn't do was evaluate different types of actions against each other - this could easily be done, but was beyond the scope of what we wanted to achieve in the paper.
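For the curious, here's a toy version of the robustness calculation. It uses a two-stage matrix model with made-up vital rates - nothing like the actual Mountain Plover parameterization in the paper - and a simple fractional uncertainty envelope around all the rates, with the worst case taken at the bottom of the envelope (legitimate here, because growth increases with every vital rate):

```python
import numpy as np

def growth_rate(F, s_j, s_a):
    """Dominant eigenvalue of a 2-stage (juvenile, adult) matrix."""
    A = np.array([[0.0, F],
                  [s_j, s_a]])
    return np.max(np.abs(np.linalg.eigvals(A)))

def robustness(F, s_j, s_a, req=1.0, grid=np.linspace(0, 0.5, 501)):
    """Info-gap robustness: the largest fractional uncertainty horizon
    alpha such that even the worst case (all vital rates shaded down
    by a factor 1 - alpha) still meets the requirement lambda >= req."""
    ok = [a for a in grid
          if growth_rate(F * (1 - a), s_j * (1 - a), s_a * (1 - a)) >= req]
    return max(ok) if ok else 0.0

# Hypothetical vital rates (illustration only, NOT the Mountain Plover
# values from the paper): fecundity x first-year survival, juvenile
# survival, adult survival.
base = dict(F=0.90, s_j=0.55, s_a=0.70)
marked = dict(base, F=1.05)   # suppose nest marking raises F

print(f"lambda (baseline): {growth_rate(**base):.3f}, "
      f"robustness: {robustness(**base):.3f}")
print(f"lambda (marking):  {growth_rate(**marked):.3f}, "
      f"robustness: {robustness(**marked):.3f}")
```

In this toy version the "action" buys a larger uncertainty horizon over which lambda stays at or above 1. That gain in robustness, rather than the gain in nominal lambda, is the satisficer's currency.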

Monday, August 8, 2011

Quote of the day

From Stephen Pacala's MacArthur lecture this morning:

Never have so many been asked to predict so much while knowing so little ...
He was referring to the models he works on to provide ecological feedbacks to global climate models. He also gave an excellent discussion of some situations where ecological models have been used to support policy decisions, and identified some attributes of the circumstances where they worked. I'm looking forward to seeing the written paper later this year, as he mentioned there is a much longer list of models.

Tuesday, August 2, 2011

It's values folks .... values all the way

Dr. John Marburger, former science advisor to the Bush Administration, was often castigated by the science community for Bush administration policies on things like stem cell research. He passed away at the age of 70 yesterday, and his obituary in the Washington Post included this quote:

“No one doubts stem cells are valuable to research and hold tremendous promise — on that, there’s no scientific controversy,” he said in 2001. But he added that the matter “is not going to be decided by science.”
This echoes a theme I've written about before: values matter, and we scientists need to get used to that fact.

I'm writing from the National Conference on Ecological Restoration in Baltimore, MD. Plenty of evidence that values matter, and just as much evidence that scientists don't understand that fact.

Thursday, July 28, 2011

Tooting one's own horn

A few years ago ... well OK more like six ... Mike Runge of the USGS gathered a group of regular attendees to the Adaptive Management Conference Series with a number of USFWS employees who had ... issues. Three of them, in fact, and the goal was to see if the sort of quantitative decision theory approaches developed for the Mid-continent Mallard Harvest could be applied to endangered species. It has taken a while, but there will soon be a special issue describing the outputs of that workshop, and the ones that followed.
In the meantime, I was recently asked to summarize what my group did for bull trout in the Lemhi Basin for laypeople. In 800 words. The result looks sharp, but that's because of the pictures more than the words, I think!

Friday, July 22, 2011

Why we should lead with values, not facts

Over the past couple of years I've had some major paradigm shifts. One of those relates to the value of science in debates - recognizing that sometimes, no amount of science is enough. I just read an article by Chris Mooney in MotherJones.com reviewing some very interesting research on how political values affect how we perceive evidence. I've quoted the last few paragraphs below to give context to the very last sentence, which says it all for my new paradigm.

The upshot: All we can currently bank on is the fact that we all have blinders in some situations. The question then becomes: What can be done to counteract human nature itself?

Given the power of our prior beliefs to skew how we respond to new information, one thing is becoming clear: If you want someone to accept new evidence, make sure to present it to them in a context that doesn't trigger a defensive, emotional reaction.

This theory is gaining traction in part because of Kahan's work at Yale. In one study, he and his colleagues packaged the basic science of climate change into fake newspaper articles bearing two very different headlines—"Scientific Panel Recommends Anti-Pollution Solution to Global Warming" and "Scientific Panel Recommends Nuclear Solution to Global Warming"—and then tested how citizens with different values responded. Sure enough, the latter framing made hierarchical individualists much more open to accepting the fact that humans are causing global warming. Kahan infers that the effect occurred because the science had been written into an alternative narrative that appealed to their pro-industry worldview.

You can follow the logic to its conclusion: Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue. Doing so is, effectively, to signal a détente in what Kahan has called a "culture war of fact." In other words, paradoxically, you don't lead with the facts in order to convince. You lead with the values—so as to give the facts a fighting chance.

That's it. Values matter. Lead with the values.

Thursday, May 5, 2011

Expert Blogging

I recently wrote a few lines about the need to be able to identify expert bloggers, to help non-experts weed out bad information in social networks. There's an interesting article in today's Financial Times on the effect of social networking on access to information. If you're like me and you don't have a subscription to the FT, you can read the excerpts and additional commentary by Roger Pielke, Jr.

Wednesday, May 4, 2011

Wise decisions and predictions

Daniel Sarewitz is a leader in the science-policy interface area, and he had this to say in an opinion piece in Nature last year:

If wise decisions depended on accurate predictions, then in most areas of human endeavour wise decisions would be impossible. Indeed, predictions may even be an impediment to wisdom. They can narrow the view of the future, drawing attention to some conditions, events and timescales at the expense of others, thereby narrowing response options and flexibility as well.

Would “projections” also lead to the same trap? According to Kevin Trenberth, the difference is that a projection makes no effort to start from the actual initial state of the system, so all that can be evaluated is the change from the assumed initial state. As a result, there is no expectation on the part of the “projector” that the projection will actually come to pass. In contrast, a prediction is made in the expectation that the future will look similar to the prediction - although, as far as I can tell, the same tools are used for both. Intriguingly, this is yet a third way to define the difference between a projection and a prediction. Either way, I think both predictions and projections run the risks described by Sarewitz.

Tuesday, April 26, 2011

The first science blogger?

I nominate Johannes Kepler as the first science blogger. Dedre Gentner, in her paper Analogy in Scientific Discovery: The Case of Johannes Kepler (2001), writes that
[Kepler] provided a running account of his feelings about the work, including the kind of emotional remarks that no modern scientist would consider publishing.

As an example she offers the following quote from Kepler's Astronomia nova:
If I had embarked upon this path a little more thoughtfully, I might have immediately arrived at the truth of the matter. But since I was blind from desire I did not pay attention to each and every part [...] and thus entered into new labyrinths, from which we will have to extract ourselves. (Kepler 1609, pp. 455-456)

Gentner provides a few other choice quotes too - hence I think that if Kepler were around today, he'd be blogging.

Thursday, April 21, 2011

Predicting the future

This is good.

Don't Transform

One of my pet peeves about my ecological colleagues is their tendency to transform binomial data using the arcsine of the square root of the proportion in order to use a linear model. OK, once upon a time it might have made sense to do this. But we have better tools now, honestly! Travis Hinkelman brought a great paper by David Warton and Francis Hui to my attention this morning. I'm just going to quote one line, which sort of says it all:
The most striking result in power simulations was that logistic regression and GLMM always had higher power than untransformed and arcsine transformed linear models ...
So, don't transform your binomial data. And please, if you are collecting proportion data, write down both the numerator and the denominator! This will be required reading in my Ecological Statistics class next fall.
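If you want to see the mechanics, here's a minimal sketch using simulated data and the statsmodels package - the parameter values are arbitrary:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulate proportion data: 'trials' attempts per observation, with
# success probability depending on a covariate x through a logit link.
n_obs, trials = 50, 20
x = rng.uniform(-2, 2, n_obs)
p = 1 / (1 + np.exp(-(0.3 + 0.8 * x)))
successes = rng.binomial(trials, p)

X = sm.add_constant(x)

# The old way: arcsine-square-root transform, then a linear model.
y_trans = np.arcsin(np.sqrt(successes / trials))
ols = sm.OLS(y_trans, X).fit()

# The better way: logistic regression on (successes, failures) --
# which is why you need to write down BOTH numerator and denominator.
glm = sm.GLM(np.column_stack([successes, trials - successes]),
             X, family=sm.families.Binomial()).fit()

print("arcsine + OLS slope p-value:", ols.pvalues[1])
print("logistic GLM slope p-value: ", glm.pvalues[1])
```

A single simulated dataset only shows the mechanics, of course; Warton and Hui's power result emerges when you repeat this a few thousand times and count rejections.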

Understanding Government

Here's a bit of a fun (or maybe disturbing) read. It reminds me of something I have to remind myself of often - that I was NOT a representative example of an undergraduate student - critical to keep in mind when evaluating student work! Brigitte Tenhumberg and I were having a conversation about what we should be trying to get our undergraduate students, especially non-biology majors, to understand about ecology. Like Pielke on politicians, we concluded that it isn't reasonable to expect undergraduate students to become experts in ecology, and therefore it isn't possible for them to determine the validity of claims made in the media (including on the internet) about the ecological consequences of various events. It seems as though we ought to be teaching them to evaluate the credibility of the people making the claims - but wow! What a can of worms that idea turns out to be.
So, as an "expert" (not sure in what!) offering my opinions up on the internet via this blog, perhaps the most important thing I can do is provide access to evidence that allows readers to evaluate my credibility on a particular claim.
Hmmm, from my bio in the top right corner it takes 3 clicks to reach an (outdated! oops) copy of my CV - and probably only because I know where to look. Googling my name gives access to that same 2-page vitae (2nd hit) and also my Facebook, LinkedIn, Academia.edu, Mendeley and Flickr profiles. All that tells someone is that I'm addicted to social networking sites ...

Saturday, March 12, 2011

Additivity in wolf harvest

Scott Creel and Jay Rotella conclude:
Examined across populations, human killing of wolves is generally not compensatory, as has been widely argued. Management policies should not assume that an increase in human-caused mortality will be offset by a decline in natural mortality.
Seems pretty cut and dried, and looking at the way they analysed their data, I can't find any reason to disagree with them. Given their result, this is a very balanced and fair statement; they also say that some level of wolf harvest probably is sustainable. However, that sustainable harvest is probably lower than what is proposed in the current Montana and Idaho management plans.

Dr. The Bird Man wondered if Adaptive Management could be used to resolve the additive/compensatory controversy, along the lines of the North American Waterfowl Harvest Management Plan. I don't think it would help in this instance. The wolf harvest is marked by sharp distinctions in how wolves are valued among various stakeholders. In addition, the institutions tasked with managing wolves are new at the game - for the past 30 years that job has been handled by the USFWS. This means that everyone - for or against harvest of wolves - is learning a new set of skills, interacting with new people, and with old people in new ways. In contrast, when the AM plan for waterfowl harvest was adopted in 1995, the institutions managing the harvest had been doing so for decades, using the same types of data, and the value diversity was (and still is) much lower than in the wolf case. It's worth noting that even after 12 years of analyzing data, the waterfowl AM process still couldn't distinguish between compensatory and additive harvest (Nichols et al. 2007). I wonder if a meta-analytic approach similar to the one Creel and Rotella used wouldn't be more helpful.

This does not mean that careful analysis and thinking about wolf populations won't be useful. The risk is that parties on both sides of the debate substitute arguments about the quality of the science for the real debate about how many wolves we want, or are prepared to live with. That's a value based question, and until the debate turns away from the science and focuses on the emotional, subjective, icky stuff, it'll be hard to resolve anything.

Friday, March 4, 2011

More on the science and politics of wolves.

The High Country News Range blog posted on the wolf controversy, and stated:
what’s become clear in the cacophony regarding wolves in the West is that where emotion rules, research should.
which is interesting, because the conclusion of social scientists who study the science-policy interface is exactly the opposite. It would be all too easy for scientists to fall into the "stealth issue advocacy" trap in the controversy over wolves. The issue is a highly polarizing one - people seem to love wolves or hate them. A scientist wishing to connect their science to policy can easily find themselves arguing for a particular position "only based on objective science", ignoring that their values inevitably influence what they research, how they research it, and what conclusions they draw.
A better role for a scientist, albeit a more difficult one, is to use science to evaluate a range of policy options. This is in fact what Creel and Rotella have provided for the wolf case: based on 21 studies of wolves, what is the relationship between human offtake (harvest or culling) and total mortality? With this relationship in hand, it is possible to evaluate different harvest quotas in terms of future wolf population size. That may or may not be used by policy makers in Montana, but it certainly should be taken seriously.

Thursday, March 3, 2011

Politics and science

I haven't read (yet) Scott Creel and Jay Rotella's article that is at the heart of this controversy, but I can feel for them. It is necessary to make assumptions when constructing a population model - and if you make different assumptions you'll get different results. The fact that the paper is peer-reviewed increases my confidence that their assumptions are defensible. Unfortunately, the results are not politically palatable! I'm looking forward to digging into their study in detail.

Monday, February 21, 2011

and while we're at it ...

We could use this to manage responses from stakeholders to our Facebook Tern and Plover management game.

Visualizing future risk

Now this is way cool. I want to do this with my Tern and Plover predictions for the Platte and Missouri Rivers.

Thursday, February 17, 2011

The anthropocene 1


The anthropocene 1, originally uploaded by atiretoo.

When I look at eastern Nebraska's landscapes, my dominant feeling is anguish - for the loss of what was, and the failure of what is to provide for the future.

Wednesday, February 9, 2011

A better oath

I tweaked the oath for ecological modelers. It doesn't matter if we're relevant as long as we're trying to move forward.

Friday, February 4, 2011

An oath to do no harm

I recently posted a checklist to prevent illicit use of quantitative tools. In the same spirit, I offer the following Hippocratic Oath for Ecological Modelers:

  • I will remember that I didn't make the world and that it doesn't satisfy my equations.

  • Though I will use models boldly to estimate extinction risk, I will not be overly impressed by mathematics.

  • I will never sacrifice reality for elegance without explaining why I have done so.

  • Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

  • I understand that my work may have trivial effects on society and the economy, many of them beyond my comprehension, but I will continue to advance my science anyway.

This is adapted from Emmanuel Derman and Paul Wilmott’s oath for economic modelers. See if you can spot my modifications. I tried to be subtle.

Monday, January 31, 2011

Hard decisions call for ignoring predictions?

The biggest issue my children worry about - daily - is whether or not school will be canceled due to inclement weather. Of course, you can guess which outcome they are rooting for ... My son is not a fan of the superintendent of schools because, in his estimation, the superintendent does not call enough snow days. Still, I was surprised when my son said that the superintendent ignores weather forecasts! Amazingly enough:

However, as superintendent of our school district, I will not call a snow day based on a weather forecast. I will call a snow day based on existing weather conditions such as significant snowfall or dangerous wind chills. I will call a snow day based on the city’s ability to make streets passable and our maintenance staff's ability to make our schools accessible and our parking lots clear.
[emphasis added]. So - the National Weather Service digital forecast for the occurrence of precipitation was 82% accurate for Lincoln over the last month. (Aside - I'm not sure how Forecastwatch.com calculates that number, but it seems like a good number to me.) Sure, it shouldn't be the only factor involved in a decision to close schools, but surely it is a useful source of information for making the decision further ahead. As a parent, I appreciate knowing as early as possible that school will be canceled so that I can make alternate arrangements. I can't see how a decision can be made the night before (as it was earlier this month) based on existing weather conditions. It has to be existing weather conditions PLUS A PREDICTION, and if the superintendent isn't looking at the forecast for the next day, then I guess he's making the prediction in his head. Maybe he's in the wrong job if he can make better predictions than the National Weather Service.
I think this is just more evidence that society at large a) doesn't understand the variability of nature, and b) devalues science that only makes probabilistic predictions. Conservation biology is stuffed.
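For what it's worth, the textbook cost-loss framing shows exactly how a probabilistic forecast should enter this decision. A minimal sketch, with completely made-up costs:

```python
# Cost-loss framing of the snow-day call (made-up numbers).
# Close when the expected loss from staying open exceeds the cost of
# closing: p * L > C, i.e. close whenever forecast probability p > C / L.
C = 1.0    # cost of a (possibly unnecessary) snow day
L = 6.0    # loss if school stays open into a genuine storm

threshold = C / L
for p in (0.1, 0.2, 0.5, 0.8):
    action = "close" if p > threshold else "stay open"
    print(f"forecast P(storm) = {p:.0%}: {action}")
```

With any sensible ratio of those two costs, the forecast probability matters to the decision; "existing conditions only" amounts to pretending p is either 0 or 1.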

UPDATE: They just called a snow day at 9:30 pm. Someone is using some kind of prediction.

Wednesday, January 19, 2011

Education is bad for you!

John Quinn pointed me to a blog post by Jason Collins about the effect of mathematical education on risk tolerance. Collins was musing about the consequences of a 2008 psychology paper demonstrating that one's innate concept of the number line shifts from a logarithmic scale to a linear scale as one is educated in mathematics. The authors went further and conducted the same tests with people from Amazonia who have had little contact with the outside world - sure enough, adults there also used a logarithmic scale for their concept of number.
Collins's contribution was to connect this to the use of logarithmic utility as a mechanism for modeling risk aversion in economics - the tendency to avoid a gamble even when the expected outcome is the same as a sure thing. If learning math makes you think linearly, maybe it also reduces risk avoidance! At least if you regularly make decisions by mapping out risk curves ...
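Here's the gamble made concrete - a minimal worked example with arbitrary numbers:

```python
import math

# A fair gamble from wealth w: win or lose h with probability 1/2
# (arbitrary numbers). Linear utility is indifferent; log utility refuses.
w, h = 100.0, 50.0

linear_eu = 0.5 * (w + h) + 0.5 * (w - h)   # equals w: indifferent
log_eu = 0.5 * math.log(w + h) + 0.5 * math.log(w - h)

print(f"linear: E[u] = {linear_eu:.2f} vs u(sure thing) = {w:.2f}")
print(f"log:    E[u] = {log_eu:.3f} vs u(sure thing) = {math.log(w):.3f}")
# log: 0.5*ln(150) + 0.5*ln(50) = 4.461 < ln(100) = 4.605, so the
# log-utility decision maker declines the fair gamble: risk aversion.
```

The concavity of the logarithm is doing all the work: the utility lost on the downside outweighs the utility gained on the upside, even though the dollar amounts are equal.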

Thursday, January 6, 2011

from 10% to certainty in 2 breaths or less

Having just spent a bit of time thinking about how risk and uncertainty are treated in different disciplines, the sound bite at 9:50 of this video caught my attention! Dr. Larry Brilliant of the Skoll Global Threats Fund describes an expert estimate that there is a 10% chance of a flu pandemic that kills 100 million or more people in the next 10 years - and the interviewer responds by saying "So it's certain there will be a pandemic, it is just a question of the time frame". Those seem like two radically different statements to me.

Actually, it reminds me of an interview with a Nebraska state legislator on NPR this morning - paraphrasing, he said that the BP spill in the Gulf of Mexico made legislators realize that pipeline technology could fail ... and hence they started paying more attention to the Keystone XL pipeline issue. Yes! Of course it can fail! If you drive enough miles, the cumulative probability of having an accident approaches 1! The idea that people need to be certain that an event will occur before they start thinking about doing something about it is amazing.
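The arithmetic behind that is a two-liner - the per-mile failure probability below is invented purely for illustration:

```python
# If each mile has a tiny independent failure probability p, the chance
# of at least one failure in n miles is 1 - (1 - p)**n. The value of p
# here is invented purely for illustration.
p = 1e-6
for n in (1_000, 100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} miles: P(at least one failure) = {1 - (1 - p)**n:.4f}")
```

However small the per-mile probability, the cumulative probability climbs toward 1 as the miles pile up - which is why "it could fail" should never be news.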

Hookahs and Anecdotes

From Andrew Gelman's Blog:

The evidence is certainly all around you pointing in the wrong direction - if you're willing to accept anecdotal evidence - there's always going to be an unlimited amount of evidence which won't tell you anything.
This is in the context of a panel of experts wondering whether hookahs cause lung cancer - one of the esteemed panelists cited an uncle who lived to 90 while smoking a hookah every day. I think there is an additional psychological mechanism involved in accepting this kind of anecdotal evidence - it is the direct experience of the person making the claim, unfiltered by statistics, other people's attention to detail, and possibly dodgy methodology. It is particularly easy to accept anecdotal evidence when the process in question is impossible to experience directly - like the population-level risk of cancer, or in my case, density-dependent reductions in population vital rates. Even when faced with their own data, plotted in a different way to demonstrate the population consequences, people cling to their own experience. And unfortunately, density dependence isn't something you can experience directly.

Tuesday, January 4, 2011

It's all Dragons in the mind

I just listened to a great podcast on the Psychology of Climate Change. Although Robert Gifford's "Dragons of Inaction" were cast in the framework of climate change, they are all relevant to environmental decision making generally.

Thanks to Kate Buenau for bringing this to my attention.