
Bryan Lawrence

... personal wiki, blog and notes

Bryan's Blog

(Only the last ten entries are here; earlier entries can be found by using the Summary, Archive or Categories pages, or by using the calendar to go to previous months and/or years).

Building your own JASMIN Virtual Machine

I make a good deal of use of the JASMIN science virtual machines, but sometimes I want to just do something locally for testing. Fortunately you can build your own virtual machine using the "JASMIN Analysis Platform" (JAP) to get the same base files.

Here's my experience building a JAP instance in a VMware Fusion virtual machine (I have a MacBook, but I have thus far done all the heavy lifting inside a Linux Mint virtual machine ... but the JAP needs a CentOS or Red Hat base machine, hence this).

Step One: Base Virtual Machine

We want a base linux virtual machine on which we build the JAP.

  1. Start by downloading a suitable base Linux installation (CentOS or Red Hat). Here is one I got some time ago: CentOS-6.5-x86_64-bin-DVD1.iso

  2. From VMware Fusion, choose File > New, double-click on the "Install from Disc or Image" option, and find the .iso from the previous step.

  3. Inside the Linux Easy Install, configure your startup account.

  4. You might want to configure the settings. I chose to give mine 2 cores and 4 GB of memory and access to some shared folders with the host.

  5. Start your virtual machine.

  6. (Ignore the message about unsupported hardware by clicking OK)

  7. Wait ... do something else ...

  8. Login.

  9. (This is a good place to take a snapshot of the bare machine if you have the available disk space. Snapshots take up as much disk as you asked for memory.)

Step Two: Install the JAP

Following instructions from here. There are effectively three steps plus two wrinkles. The three steps are: get the Extra Packages for Enterprise Linux (EPEL) repository into your yum configuration; get the CEDA JAP repository into your yum configuration; and build. Then the wrinkles: the build currently fails, in two ways! However, the fixes to make it build are pretty trivial.

  1. Open up a terminal window and su to root.

  2. Follow the three steps on the installation page, and then you'll see something like this:

    --> Finished Dependency Resolution
    Error: Package: gdal-ruby-1.9.2-1.ceda.el6.x86_64 (ceda)
               Requires: libarmadillo.so.3()(64bit)
    ... 
    Error: Package: grib_api-1.12.1-1.el6.x86_64 (epel)
               Requires: libnetcdf.so.6()(64bit)
    ...
                   Not found
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
    

    But never fear, two easy fixes are documented here. You need to

  3. Force the install to use the CEDA grib_api, not the EPEL version. You do that by putting

    exclude=grib_api*
    

    at the end of the first (EPEL) section in the /etc/yum.repos.d/epel.repo file (see the sketch at the end of this post), and

  4. Add the missing (older version of the) armadillo library by downloading the binary rpm attached to the ticket and installing it locally; then you can redo the final step:

  5. yum install jasmin-sci-vm

And stand back and wait. You'll soon have a jasmin-sci-vm.
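For reference, once that exclude line is in place, the tail of the [epel] section of /etc/yum.repos.d/epel.repo ends up looking something like this (the lines above the exclude are just typical defaults, sketched here for context rather than copied verbatim):

    [epel]
    name=Extra Packages for Enterprise Linux 6 - $basearch
    mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
    exclude=grib_api*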

by Bryan Lawrence : 2014/08/04 : Categories jasmin : 0 comments (permalink)

simulation documents

In my last post I was discussing the relationship between the various elements of documentation necessary to describe the simulation workflow.

It turns out the key linking information is held in the simulation documents - or should be, but for CMIP5 we didn't do a good job of making clear to the community the importance of the simulation as the linchpin of the process, so many were not completed, or not completed well. Its importance should be clear from figure one of the previous post, but we never really promulgated that sort of information, and it certainly wasn't clear from the questionnaire interface - where the balance of effort was massively tilted towards describing the configured model.

Looking forward, it would seem sensible to separate the collection of information about the simulation from the collection of information about the configured model (and everything else). If we did that, the folks running the simulations could get on and document those in parallel with those documenting the models etc. It would also make the entire thing a bit less daunting.

To that end, I've tried to summarise the key simulation information in one diagram:

Image: static/2014/08/01/simulations.jpg

(Like last time, this is not meant to be formal UML, but something more pleasing to the scientist's eye.)

The key information for a simulation appears in the top left box, and most of it could be completed using a text editor if we gave folks the right tools. At the very least it could be done in a far easier way, one that allowed cut and paste. The hard part was, and still would be, the conformances.

(For now we don't have a specification for how to describe performance and resources, but these are not expected to be hard, and the groundwork has already been done by Balaji at GFDL.)

One tool we would need to provide folks, to make the best use of this information, is a way of parsing the code mods and the configured models to create a sort of key-value list which describes any given configured model in a way that can be compared, in terms of mathematical distance, with another model. Such a tool would enable the creation of model genealogies (a la Masson and Knutti, 2011) in a completely objective way.
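To make that concrete, here's a minimal sketch (in Python, with entirely invented property names) of the sort of comparison such a key-value description would enable:

    # Minimal sketch: the property names and values are invented for illustration;
    # a real tool would generate these dictionaries by parsing the code mods and
    # the configured model documents.

    model_a = {"dynamical_core": "spectral", "ozone": "static", "aerosol_scheme": "A"}
    model_b = {"dynamical_core": "spectral", "ozone": "interactive", "aerosol_scheme": "A"}

    def model_distance(m1, m2):
        """Fraction of documented properties on which two configured models differ
        (a crude Hamming-like distance over the key-value descriptions)."""
        keys = set(m1) | set(m2)
        differences = sum(1 for k in keys if m1.get(k) != m2.get(k))
        return differences / len(keys)

    print(model_distance(model_a, model_b))  # 0.33...: the two differ on one of three properties

Run that over every pair of configured models and you have the raw material for an objective genealogy.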

One thing to note is that the simulation collection documents allow one to collect together older simulations into new simulation collections, which means that we ought to be able to develop simulation and data collections which exploit old data and old models within the ensembles for new experiments.

(I should say that these blog posts have been a result of conversations with a number of folk, this last one based on a scribble on the back of an envelope from a chat with Eric Guilyardi.)

by Bryan Lawrence : 2014/08/01 : Categories esdoc metafor cmip6 : 0 comments (permalink)

Updated version of the model documentation plots

A couple of weeks ago, I outlined three figures to help describe the model documentation workflow and asked for a bit of feedback.

I've had some feedback, and done some more thinking, so here are three updated versions of those plots.

In each figure, the boxes and links are not meant to correspond directly to the UML classes and associations of the (es-doc) Common Information Model (although some do) - the intention is to describe the concepts and intent of the various pieces of documentation.

Figure One

Image: static/2014/07/29/MIPprocess_esdocV2.jpg

The MIPs process involves designing experiments which are provided to modelling centres.

Modelling centres configure model code to produce a configured model, which needs InputData.

They then run one (or many) Simulations to support one or more experiments. The Simulation will conformTo the NumericalRequirements of the Experiment, will produce OutputData, and was run on a Platform.

The output data is uploaded to the ESGF archive where it supports the MIP.

Each of the coloured boxes represents an es-doc "document-type". The yellow coloured relationships will be included in a given Simulation document.

Figure Two

Image: static/2014/07/29/simple_esdocV2.jpg

Neglecting the process, we can look at the various document types and what they are for in more detail.

A simulation (which can also be an aggregation of simulations, also known as an ensemble) will have conformed to the requirements of the model and the experiment via conformances. Many of these constrain the input data, so as to meet the input requirements of the model, and may also have been constrained by one of the numerical requirements of the experiment. Others may affect the code (maybe via a choice of a particular code modification, or via a specific parameter choice or choices).

Configured models are described by their Science Properties.

A Simulation document will include all the information coloured in yellow, so it will define which configured model was used via the uses relationship, which will point to the configured model document, which itself describes the model.

Similarly, an experiment document will define all the various numerical requirements, and may point to some specific input data requirements.

Ideally output data objects will point back at the simulation which produced them.

Figure Three

Image: static/2014/07/29/Simulations_esdocV2.jpg

In even more detail we can see that numerical requirements come in a number of flavours, including:

  • SpatioTemporalConstraints - which might define required experimental start dates, durations, and/or the coverage (global, regional etc).

  • Forcings - which generally define how various aspects of the system might be represented, for example, providing Ozone files for a consistent static ozone representation.

  • OutputRequirements - which define what output is required for intercomparison.

Simulation conformances are generally linked to specific numerical requirements and consist of data and code-mod conformances.

Currently es-doc does not clearly distinguish between data requirements and data objects; this will be fixed in a future version.

ConfiguredModels are configured from BaseModel code (this is not currently implemented in es-doc). In principle they can be described both by their software properties and the science properties (and so can their sub-components).

The run-time performance of the simulation should also be captured by resource and performance characteristics (but this too is not yet supported by es-doc).

A model description document should include all the material in green, including the links.
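To make the split between document types a bit more tangible, here is a purely illustrative sketch (in Python, with invented class and attribute names - this is not the es-doc CIM schema) of what a simulation document carries itself and what it merely points at:

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative only: the class and attribute names are invented,
    # not the actual es-doc CIM classes.

    @dataclass
    class Conformance:
        requirement: str   # the numerical requirement being conformed to
        kind: str          # "data" or "code-mod"
        detail: str        # e.g. an input dataset reference or a parameter/code-mod choice

    @dataclass
    class SimulationDocument:
        name: str
        uses: str                 # reference to a ConfiguredModel document
        supports: List[str]       # references to Experiment documents
        ran_on: str               # reference to a Platform document
        produced: List[str]       # references to OutputData objects
        conformances: List[Conformance] = field(default_factory=list)

    # The simulation document holds the links (the yellow material); the model,
    # experiment and platform descriptions live in their own documents.
    sim = SimulationDocument(
        name="some-simulation",
        uses="some-configured-model",
        supports=["some-experiment"],
        ran_on="some-platform",
        produced=["some-output-dataset"],
        conformances=[Conformance("start date requirement", "data", "restart file choice")],
    )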

by Bryan Lawrence : 2014/07/29 : Categories esdoc : 1 trackback : 0 comments (permalink)

NCAS Science Conference, Bristol

In the middle of two days in Bristol for the NCAS science conference. Good to see what so many of my colleagues are up to (distributed as they are, across the UK).

My talk on the influence of Moore's Law on Atmospheric Science is linked from my talks page.

I wish the rest of the talks were publicly available; there is a lot of good stuff (much of it yet to come, today). The problem with (very slow) peer review as a gold standard is that a lot of stuff only sees the public light of day (even within the relevant science community) long after the work was done, whereas much of it is fit for exposure (and discussion) well before then - but you have to go to the right meeting/workshop/conference. However, some of what is discussed is provocative, and work in progress ... and of course our community is (for good reasons, mostly related to an uneducated public spotlight) somewhat shy of premature publication. It's a conundrum.

by Bryan Lawrence : 2014/07/18 : 0 comments (permalink)

The Influence of Moore's Law

NCAS Science Meeting, Bristol, July 2014

I gave a talk on how Moore's Law and friends are influencing atmospheric science, the infrastructure we need, and how we are trying to deliver services to the community.

Presentation: pdf (19 MB!)

by Bryan Lawrence : 2014/07/17 : Categories talks (permalink)

The vocabulary of documenting models

Some time ago our European project Metafor migrated to a global project called es-doc. During the Metafor project we put a lot of effort into trying to develop materials which described what it was trying to achieve, but I think we never really got that right (despite a number of papers on the subject - e.g. Lawrence et al 2012, Guilyardi et al 2013 etc).

I'm currently in the process of trying to produce a figure on the subject for another publication, and in the process have produced these, which I think might be quite useful for understanding what es-doc is trying to document, and what the main kinds of documentation are that it produces. I'd be interested in feedback as to whether these figures are helpful or not.

In each figure, the boxes and links are not meant to correspond directly to the UML classes and associations of the (es-doc) Common Information Model (although some do) - the intention is to describe the concepts and intent of the various pieces of documentation.

Figure One

Image: static/2014/07/09/MIPprocess_esdoc.jpg

The MIPs process involves designing experiments which are provided to modelling centres.

Modelling centres configure model code to produce a configured model.

They then run one (or many) Simulations to support one or more experiments. The Simulation will conformTo the NumericalRequirements of the Experiment, will produce OutputData, and was run on a Platform.

The output data is uploaded to the ESGF archive where it supports the MIP.

Each of the coloured boxes represents an es-doc "document-type".

Figure Two

Image: static/2014/07/09/simple_esdoc.jpg

Neglecting the process, we can look at the various document types and what they are for in more detail.

A simulation (which can also be an aggregation of simulations, also known as an ensemble) will have used some input data, some of which may have been defined and/or constrained by one of the numerical requirements of the experiment.

Constraints on the simulation by the numerical requirements might require conformances - which define how those constraints affect either the data or the code (maybe a choice of a particular code modification, or via a specific parameter choice or choices).

Configured models are described by their Science Properties.

A Simulation document will include all the information coloured in yellow, so it will define which configured model was used via the uses relationship, which will point to the configured model document, which itself describes the model.

Similarly, an experiment document will define all the various numerical requirements, and may point to some specific input data requirements.

Ideally output data objects will point back at the simulation which produced them.

Figure Three

Image: static/2014/07/09/Simulations_esdoc.jpg

In even more detail we can see that numerical requirements come in a number of flavours, including:

  • SpatioTemporalConstraints - which might define required experimental start dates, durations, and/or the coverage (global, regional etc).

  • Forcings - which generally define how various aspects of the system might be represented, for example, providing Ozone files for a consistent static ozone representation.

  • OutputRequirements - which define what output is required for intercomparison.

Simulation conformances are generally linked to specific numerical requirements and consist of data and code-mod conformances.

ConfiguredModels are configured from BaseModel code (this is not currently implemented in es-doc). In principle they can be described both by their software properties and the science properties (and so can their sub-components).

The run-time performance of the simulation should also be captured by resource and performance characteristics (but this too is not yet supported by es-doc).

A model description document should include all the material in green, including the links.

by Bryan Lawrence : 2014/07/09 : Categories esdoc metafor : 1 trackback : 2 comments (permalink)

Accessing JASMIN from Android

I have a confession to make. I sometimes illicitly work using my Android phone and legitimately with my Android tablet. Sometimes that work involves wanting to ssh into real computers, and edit files on real computers. One real computer I want to access is JASMIN, and access to JASMIN is controlled by passphrase-protected public/private key pairs.

This is how I do it. None of it's rocket science, but I'm documenting it here, since there are zillions of possible tools, and I spent ages working out which ones I could actually use ... so this is to save you that wasted time.

Before we start, a couple of words of warning though: to do this you're going to have to put a copy of your private key on your Android device. It's important to realise that means someone with access to your Android device is only one step away from access to your computing accounts. In my case, to protect that key I have taken three steps: I have encrypted my Android device, I have a very short time (30s) before lockdown, and I have remote wipe enabled. If the bad guys still get access, my key still has a passphrase! I hope those steps are enough that if I lose my phone/tablet I will realise and block access before anything bad could happen.

You might read elsewhere about how to use dropbear and your private key on your device. Right now, dropbear doesn't support keys with passphrases, so I am not using dropbear (since to do so would require me to remove the passphrase on my private key). This also means that some Android apps for terminal access which use dropbear under the hood (pretty much any that use busybox) can't exploit a properly protected private key. You can use them for lots of things, but don't use them for JASMIN (or similar) access.

Ok, onwards.

Step 1: Get your Key

Get a copy of your ssh private key onto your device. You can do this by any means you like, but ideally you'd do it in a secure way. This is the way I did it:

  1. Make sure you have your private key somewhere where you can get at it by password protected ssh (not key protected).

  2. Load an app onto your device which gives you an scp/ssh enabled terminal to your device. I used AirTerm (which provides a cool floating terminal, very useful on tablets especially). Note that unfortunately AirTerm is not suitable for JASMIN access because it appears to use dropbear under the hood, and so it can't handle our private/public key pairs.

    • (If you are using AirTerm, you will want to go to preferences and install kbox to follow what I've done.)

  3. Fire up AirTerm, and change working directory:

    cd /storage/sdcard0
    

  4. Make a directory for your key, and change into it, something like

    mkdir mystuff
    cd mystuff
    

  5. Now scp your private key into that directory, something like:

    scp you@yourhost.wherever:path_to_key/your_private_key .
    

  6. You may or may not need to move the copy of your_private_key on yourhost.wherever, depending on whether it's secure there.

(You're done with AirTerm for now, but I'm sure you'll find lots of other uses for it.)

Step 2: Get and Configure JuiceSSH

I use JuiceSSH to work with remote sites which use the key infrastructure. It has lots of nice properties (especially the pop-up terminal keyboard) and it can manage connections and identities, and handle multi-hop ssh connections (e.g. for JASMIN, as needed to get to the science nodes via the login nodes).

JuiceSSH is pretty straightforward. Here's what you need for JASMIN.

  1. Fire up JuiceSSH. You will need a password for juice itself. Select something memorable and safe ... but different from your private key passphrase! (If you're like me, and forget these things, you might want to exploit something like lastpass to put this password in your vault).

  2. Add your JASMIN identity:

    • Select a nickname for this identity, give it your jasmin username, and then choose the private key option. Inside there, let smart search find your private key (and then tick it). Update and save.

  3. Now add a connection to jasmin-login1, the options are pretty straightforward. You can test it straight away. If it doesn't work, ask yourself if you need to use a VPN client on your phone/tablet first to put yourself in the right place for JASMIN to take an inbound connection.

  4. You can add a direct connection to jasmin-sci1 (or your science machine) by using the via option in the setup. Here's an example of how to configure that.

    Image: static/2013/11/13/juice.png

    In that example, "jasmin login" is the nickname I gave to the connection to jasmin-login1, and "bnl jasmin pk" is the nickname for my jasmin identity.

Now you can access JASMIN, but what about editing files?

Step 3: Get and Configure DroidEdit

I'm using DroidEdit as my tool of choice for editing on Android although, to be fair, I'm not doing that much editing on Android yet, so I'd be interested in hearing if there are other, better tools. I primarily chose it because it has support for both PKI and editing remote files via SFTP.

Once you have DroidEdit, you need to go into Settings, and under the SFTP/FTP actions choose "Add Remote Server". Configure it with the path to your private key, and save it using the server address jasmin-login1.ceda.ac.uk. Ignore the fact that the test fails (note that it prompts you for a password, not a passphrase).

Then go back out and try to open a file on JASMIN. This time you'll get prompted for a passphrase and, voila, it should just work (as before, make sure your Android device has used a VPN or whatever to be "in the right place").

by Bryan Lawrence : 2013/11/13 : Categories jasmin : 0 comments (permalink)

vertical resolution

Last week I pointed out that I wasn't at all sure the analysis by LFR89 really applied at modern horizontal grid resolutions, since the vertical scales implied for quasi-geostrophic motion didn't make sense.

I've done a wee bit more delving, and now I'm sure it's not appropriate. The analysis LFR89 did was based on the solutions of the "quasi-geostrophic pseudo-vorticity equation". This is a venerable equation, first derived by a couple of folks, but formalised by Jules Charney. It's derived by carrying out a scale analysis of the primitive equations of motion, suitable for "large scale motions where departures from hydrostatic and geostrophic equilibrium are small". I still haven't done the rederivation myself (it's a lot of bookwork, and a desultory attempt to set up sympy to do it ran out of time), but Charney himself (Charney, 1971) in an interesting paper (of some relevance here) put some bounds on the various scales of validity (see his equation 9). As a consequence, Charney points out that these equations define a band of specific horizontal and vertical scales! The fastest way to get to that band is to go back to the fuller derivation in Charney and Stern, 1961, where we get the constraints laid out more easily in the scale analysis. In particular, the quasi-Boussinesq and hydrostatic approximations give us:

L²/D < g/f² (~10⁹) and D/L < 0.1

Putting L=100km into those equations suggests that D should lie between 1 and 10km, which isn't quite the same as we get from the assertion in LFR89 that all vertical scales may appear and that they are related to the expression:

L = D * N/f

(which gives us D=1km, which I think is more to do with the scales of the baroclinic wave solutions to the QG equation). Of course this scale analysis needs to be evaluated carefully in practice; in particular, ever smaller values of L and D may support those kinds of wave solutions in the QG equations, but the equations themselves are no longer valid at those scales.

I'd be happy if someone could be bothered to do the derivation with constraints properly, and evaluate them completely for all scales, but until they do, I'm not going to invest too much more time in using LFR89 to give me "large" scale constraints on the vertical resolution.

Charney's paper is interesting and relevant since it also points out that the larger scales do not feed energy to the smaller scales, but it says nothing significant about other scales, such as those involved in fronts and blocking and gravity waves. As I've already argued, I think these tell us more of what we need to know. That's where we'll go next.

Update 19/09/13: I have slightly edited this post since I didn't like the tone of a particular sentence once the dust had settled; on rereading it a day later it carried connotations that were not intended. I fixed that and added a clarifying sentence or two.

by Bryan Lawrence : 2013/09/18 (permalink)

zotero, zandy, greader, evernote, and me

Quite a while ago (i.e. years), I decided that managing my bibliographic information in a bibtex file wasn't working any longer. Back then I had a look at Mendeley and Zotero. I can't really remember why, but I chose Zotero (I think it was a combination of how it worked for me when I played with both, and that I didn't like having to use their PDF viewer. I also had some worries about Mendeley and the software and information IPR ... when Elsevier bought out Mendeley I felt vindicated on the latter.)

Anyway, now Zotero is a pretty integral part of my working environment. I use zotero standalone on my linux laptop (which is also my desktop when it's in a docking station). I make heavy use of zotfile to migrate papers to and from my Android tablet for reading (I no longer print out anything). I like being able to annotate my PDFs on the tablet, and in particular, having anything I highlighted being pulled out automagically when zotfile pulls the papers back off the tablet.

However, there are two issues with that workflow that bug me. I'd like my PDF library to be completely synchronised for offline reading on my tablet, and I'd like a fully featured native zotero client on the tablet (and my Galaxy Note phone). Zandy is the only Android app for zotero, and while it has some useful functionality (it synchronises the metadata so at least one can check on the phone/tablet if something is in my library), it doesn't synchronise the attachments completely.

(I do use box.net to synchronise my attachments out from zotero standalone via webdav, which works, but one can only use it to effectively download attachments one by one to Zandy - there is no bulk download facility, and no way to annotate and upload back - it's one way sync! But you can view stuff without going to the journal which can be useful for memory jogging.)

The other thing I can't do on my Android devices, and in particular my phone, is effectively create zotero information. There are ways, I could:

  • Manually enter the information in Zandy (no thanks, the whole point of zotero is to avoid manual bibliographic entry where possible). (There is a scanner option, but I'm mostly dealing with papers found on journal websites.)

  • Use the zotero bookmarklet in Chrome. Well yes, that's possible, but it fails miserably on the AMS journal websites, and requires an inordinate amount of clicking and typing. (The way you use the bookmarklet is to start typing its bookmark name into the address bar of the page you are looking at, and if it can find a translator, it loads the page into zotero.)

What I really want to do is share a journal entry from a feed reader straight into zotero. I can nearly do this. However, from greader, if I

  • Share to Zandy, I basically just get the paper loaded as a web page, and I have to manually fix it all later. This isn't necessarily a bad option, at least I get something, but it's often not enough, unless I do that manual step. You can guess how often I do it ...

  • Share to evernote, I can at least get the abstract and most of the body out of the RSS/atom straight into evernote (again the AMS journal feeds are hopeless). But now I have my bibliographic information in two places: abstracts in evernote, and full papers with proper references in Zotero. Searching is cumbersome.

Anyone got a better solution for (zotero based) bibliographic handling from Android (or a way of encouraging Avram Lyon, the Zandy author, to get back into active development)?

I need it to work from Android, because it's in the nature of my job that I spend a lot of time before and after meetings, travelling etc, when being able to interact with the scientific literature on my phone and tablet would make me more productive. Indeed, I do most of my journal paper triage on my phone! (No, I am not going to consider becoming an apple fan-boy!)

(Of course most of this is pointless if the paper is invisible behind a paywall. Invective removed by the editor/author.)

by Bryan Lawrence : 2013/09/11 : 1 comment (permalink)

Vertical and Horizontal Resolution

I've been delving in the literature a bit this week ... considering model resolution and various issues around it. This post is by way of notes from my reading.

One of the things to consider at any time is whether we have enough resolution. Most climate scientists will tell you they need more horizontal resolution, but fewer will concede they need more vertical resolution.

It should be (but appears not to be) well known that just as one has to consider changing the time-step as horizontal resolution is increased, one needs to consider whether there is enough vertical resolution. This issue was dealt with quite a time ago in Lindzen and Fox-Rabinovitz (1989) (hereafter LFR89). There have been some recent follow-ups on the importance for chemistry (e.g. Kent et al 2012) and on model performance in general (e.g. Marques et al 2011). (It's probably worth pointing out that the latter, and references therein, point out that model convergence to reality depends as much on how the physics deals with resolution as on the dynamics, but that's a point for another day ... but if you want to go there, you could look at Pope and Stratton 2002 and Pope et al 2001, although I have to say both do a bit of special pleading to rule out extra vertical resolution.)

Anyway, I thought it might be interesting to tabulate what sorts of resolution are actually needed for various tasks. It's important to note that LFR89's analysis comes up with different resolutions for different tasks and at different latitudes. So, if we're to take LFR89 at face-value and we're interested in quasi-geostrophic scales, then we can extend their table to modern model resolutions:

  dx (deg)   dz at equator   dz at 60 deg   dz at 45 deg   dz at 22.5 deg
  0.25       1 m (!!)        84 m           97 m           69 m
  0.5        3 m (!!)        170 m          190 m          140 m
  1          14 m (!)        340 m          390 m          270 m
  2          54 m (!)        670 m          780 m          550 m
  5          340 m           1700 m         1900 m         1400 m

Clearly there is a problem with this analysis in the tropics at all scales, and everywhere at 25km. Common sense suggests one can't have atmospheric phenomena with horizontal scales of over 50km with vertical scales of 1m. Pretty obviously the scaling assumptions that underlie the LFR89 use of quasi-geostrophy are broken. Which brings us to a moot point in interpreting LFR89: if one starts with a QG equation, we've already rejected a bunch of small scales which LFR89 have coming out of the analysis at modern high resolution scales. We probably need to rethink the analysis! (Which is to say, here and now, I'm not going to do that rethinking :-). (1)

Fortunately for me (in terms of analysis), right now I'm less interested in the large-scale horizontal flows than in gravity waves, and there the analysis of LFR89 is a bit more timeless. The analysis pretty much says that if you're interested in breaking gravity waves you need infinite resolution. However, they then back off and do a bit of a fudge around effective damping to suggest that to resolve gravity wave processes one needs vertical resolutions of roughly 0.006 times the grid resolution in degrees. For the horizontal resolutions above, that gives us something like 1.5, 3, 6, 12 and 30 m vertical resolutions.
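For the record, the arithmetic behind those numbers (a back-of-the-envelope sketch; I'm reading the 0.006 factor as kilometres of required vertical resolution per degree of horizontal resolution, which is the reading that reproduces the values quoted):

    # Back of the envelope: required vertical resolution to resolve gravity wave
    # processes, reading the LFR89 fudge as dz [km] ~ 0.006 * dx [degrees].
    for dx in (0.25, 0.5, 1, 2, 5):
        dz_metres = 0.006 * dx * 1000.0
        print(f"dx = {dx:>4} deg  ->  dz ~ {dz_metres:g} m")
    # gives 1.5, 3, 6, 12 and 30 m respectively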

That doesn't look likely any time soon!

Another approach is to look at what people have thought they need (and why). One of the reasons I started all this thinking was because I was wondering how easy it would be to repeat Watanabe et al (2008)'s work with the UM. Watanabe et al used a T213L256 model with a model top at 85km, having done a lot of previous work evaluating L250 type models. This is roughly a 0.5 degree model using the table above, and has an average vertical resolution of about 300m, which is not too far from LFR89 in the table above (at least using the value of N discussed in the footnote). Most other models fall well short of that. For the UM, even studies which look at resolved gravity waves in the stratosphere have relatively coarse resolutions; e.g. Shutts and Vosper (2011) use 70 levels to a model top at 80km (again with a model resolution around 0.5 degrees). However, the standard configuration of that model had a time-stepping regime which filtered out resolved gravity waves, so, when used in a configuration which didn't filter gravity waves, the vertical resolution was constrained to be the same as that of the standard model. Similarly, Bushel et al (2010), in a study looking at tropical waves and their interaction with ozone, used a relatively low horizontal and vertical resolution (between 1 and 4 degrees horizontally) and L60 to 84km - but again, resolved gravity waves were filtered out, and parameterisations were used.

As an aside, one of the arguments in Pope et al 2001 as to why vertical resolution is less important in the tropics is a reference to Nigam et al 1986, who they assert show that non-linear processes smooth fields naturally so as to diminish vertical resolution requirements. This is one of the cases where I have some of my own opinions: see, for example, Rosier and Lawrence, 1999, discussing, amongst other things, pancake structures with small vertical scales in the tropical stratosphere. Given that there now seems to be a body of evidence suggesting that the troposphere does react dynamically to the middle atmosphere in climatically important ways, that brings me nicely back to wanting more vertical resolution ... even if we buy that it's not needed in the troposphere, and I'm a long way from buying that ... yet (particularly given recent results looking at blocking and resolution in CMIP5 models: Anstey et al, 2013).

However, for the UM, before I worry too much about the vertical resolution, I've got to get to the bottom of the time-step filtering I alluded to above.

(1): That said, to repeat their table, I had to replace a 3 in their approximation for N with a 2, and in fact I'd rather use a tropospheric average of N~0.012, in which case we get nearly a factor of 2 larger required resolution. However, the fundamental issue is still that I would prefer to work through the assumptions of the QG approximation; I think there is a problem in there ... but I don't have time now.

by Bryan Lawrence : 2013/09/09 : 1 comment (permalink)


DISCLAIMER: This is a personal blog. Nothing written here reflects an official opinion of my employer or any funding agency.