The UK National River Flow Archive (NRFA) serves daily streamflow data, spatial rainfall averages, and information on elevation, geology, land cover and FEH-related catchment descriptors.
An API is currently under development that should, in future, provide access to the following services: a metadata catalogue; catalogue filters based on a geographical bounding box; catalogue filters based on metadata entries; and gauged daily data for about 400 stations, available in WaterML2 format, the OGC standard used to describe hydrological time series.
The first three services return information in JSON format, while the last one returns an XML variant (WaterML2).
The rnrfa package aims to provide simpler and more efficient access to these data through wrapper functions that send HTTP requests and interpret the XML/JSON responses.
The rnrfa package depends on the gdal library; make sure it is installed on your system before attempting to install this package.
R package dependencies can be installed by running the following code:
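A minimal sketch of that step, assuming the package is installed from CRAN:

```r
# Install the stable version of rnrfa from CRAN
install.packages("rnrfa")

# Load the package
library(rnrfa)
```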
This demo also makes use of external libraries. To install and load them, run the following commands:
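For example (the package list below is inferred from the commands used later in this demo; adjust as needed):

```r
# External packages used in the rest of this demo
pkgs <- c("data.table", "leaflet", "dygraphs", "ggplot2")

# Install any that are missing, then load everything
# (parallel ships with base R, so it only needs loading)
new_pkgs <- setdiff(pkgs, rownames(installed.packages()))
if (length(new_pkgs) > 0) install.packages(new_pkgs)
invisible(lapply(c(pkgs, "parallel"), library, character.only = TRUE))
```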
The function stations_info() returns a vector of all NRFA station identifiers.
The function catalogue() retrieves information for monitoring stations. The function, used with no inputs, requests the full list of gauging stations with associated metadata. The output is a tibble containing one record for each station and as many columns as the number of metadata entries available.
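As an illustration (network access and the rnrfa package are assumed):

```r
library(rnrfa)

# Request the full list of gauging stations with associated metadata
allStations <- catalogue()

# One row per station, one column per metadata entry
dim(allStations)
head(allStations$name)
```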
The columns are briefly described below (see also API documentation):
id: The station identifier.
name: The station name.
catchment-area: The catchment area (in km²).
grid-reference: The station grid reference. For JSON output the grid-reference is represented as an object with the following properties:
  ngr (String): The grid reference in string form (e.g. “SS9360201602”).
  easting (Number): The grid reference easting (in metres).
  northing (Number): The grid reference northing (in metres).
lat-long: The station latitude/longitude. For JSON output the lat-long is represented as an object with the following properties:
  string (String): The textual representation of the lat/long (e.g. “50°48’15.0265”N 3°30’40.7121“W”).
  latitude (Number): The latitude (expressed in decimal degrees).
  longitude (Number): The longitude (expressed in decimal degrees).
river: The name of the river.
location: The name of the location on the river.
station-level: The altitude of the station, in metres, above Ordnance Datum or, in Northern Ireland, above Malin Head.
easting: The grid reference easting.
northing: The grid reference northing.
station-information: Basic station information: id, name, catchment-area, grid-reference, lat-long, river, location, station-level, measuring-authority-id, measuring-authority-station-id, hydrometric-area, opened, closed, station-type, bankfull-flow, structurefull-flow, sensitivity, category.
The same function catalogue() can be used to filter stations based on a bounding box or any of the metadata entries.
```r
# bbox should be a list of the form
# list(lon_min = ..., lon_max = ..., lat_min = ..., lat_max = ...)

# Filter based on minimum recording years
catalogue(min_rec = 100)

# Filter stations on a given river
catalogue(column_name = "river", column_value = "Wye")

# Filter based on bounding box & metadata strings
catalogue(bbox, column_name = "river", column_value = "Wye")

# Filter stations based on a threshold
catalogue(bbox, column_name = "catchment-area", column_value = ">1")

# Combine bounding box, threshold and minimum recording years
catalogue(bbox, column_name = "catchment-area", column_value = ">1", min_rec = 30)

# Filter stations based on identification number
catalogue(column_name = "id", column_value = c(3001, 3002, 3003))
```
The rnrfa package allows convenient conversion between UK grid references and more standard coordinate systems. The function osg_parse(), for example, converts an NGR string to easting and northing in the BNG coordinate system (EPSG code: 27700), as in the example below:
The same function can also convert the grid reference to latitude and longitude in the WGS84 coordinate system (EPSG code: 4326), as in the example below.
osg_parse() also works with multiple references:
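A sketch of the three conversions (the rnrfa package is assumed; the NGR string is the one from the metadata example above, and the two shorter references are illustrative):

```r
library(rnrfa)

# NGR string to easting/northing in BNG (EPSG:27700)
osg_parse("SS9360201602")
# returns a list with easting 293602 and northing 101602

# NGR string to latitude/longitude in WGS84 (EPSG:4326)
osg_parse("SS9360201602", coord_system = "WGS84")

# Multiple grid references at once
osg_parse(c("SN831869", "SN829838"))
```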
The first column of the table someStations contains the station id. This can be used to retrieve time series data and convert WaterML2 files to time series objects (of class zoo).
The National River Flow Archive serves two types of time series data: gauged daily flow and catchment mean rainfall.
These time series can be obtained using the functions gdf() and cmr(), respectively. Both functions accept three inputs:
id, the station identification numbers (single string or character vector).
metadata, a logical variable (FALSE by default). If metadata = TRUE, the result for a single station is a list with two elements: data (the time series) and meta (the metadata).
cl, a cluster object created by the parallel package. It is NULL by default, which sends sequential calls to the server.
Here is how to retrieve monthly mean rainfall data for the Shin at Lairg catchment (id = 3001).
```r
# Fetch only time series data from the waterml2 service
info <- cmr(id = "3001")
plot(info)

# Fetch time series data and metadata from the waterml2 service
info <- cmr(id = "3001", metadata = TRUE)
plot(info$data,
     main = paste("Monthly rainfall data for the",
                  info$meta$stationName, "catchment"),
     xlab = "", ylab = info$meta$units)
```
Here is how to retrieve daily flow data for the Shin at Lairg catchment (id = 3001).
```r
# Fetch only time series data
info <- gdf(id = "3001")
plot(info)

# Fetch time series data and metadata from the waterml2 service
info <- gdf(id = "3001", metadata = TRUE)
plot(info$data,
     main = paste0("Daily flow data for the ", info$meta$station.name,
                   " catchment (", info$meta$data.type.units, ")"))
```
By default, the functions send sequential calls to the server. getTS() can be used to fetch time series data from multiple sites in sequential mode (using one core):
Upgrade your data.frame to a data.table:
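One possible sketch, assuming someStations is the tibble returned by catalogue() and that it contains the river column listed above:

```r
library(data.table)
library(rnrfa)

someStations <- catalogue()

# Convert the tibble to a data.table and key it for fast filtering
dt <- as.data.table(someStations)
setkey(dt, river)

# All stations on the River Wye
dt["Wye"]
```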
Create interactive maps using leaflet:
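A minimal sketch, assuming the catalogue tibble exposes latitude, longitude and name columns (see the metadata list above) and using an illustrative bounding box:

```r
library(leaflet)
library(rnrfa)

# Stations within an illustrative bounding box (upper Wye area, assumed extent)
someStations <- catalogue(bbox = list(lon_min = -3.82, lon_max = -3.63,
                                      lat_min = 52.43, lat_max = 52.52))

# Interactive map with one marker per station
leaflet(data = someStations) %>%
  addTiles() %>%
  addMarkers(lng = ~longitude, lat = ~latitude, popup = ~name)
```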
Interactive plots using dygraphs:
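A minimal sketch, assuming cmr() returns a zoo series as described above (dygraphs converts it to xts internally):

```r
library(dygraphs)
library(rnrfa)

# Monthly catchment rainfall for station 3001, as a zoo object
info <- cmr(id = "3001")

# Interactive plot with a draggable range selector
dygraph(info, main = "Monthly rainfall, station 3001") %>%
  dyRangeSelector()
```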
Sequential vs Concurrent requests: a simple benchmark test
```r
library(parallel)

# Use detectCores() to find out how many cores are available on your machine
cl <- makeCluster(getOption("cl.cores", detectCores()))

# Filter all the stations within the above bounding box
someStations <- catalogue(bbox)

# Get flow data with a sequential approach
system.time(s1 <- gdf(someStations$id, cl = NULL))

# Get flow data with a concurrent approach (using `parLapply()`)
system.time(s2 <- gdf(id = someStations$id, cl = cl))

stopCluster(cl)
```
The measured flows are expected to increase with catchment area. Let’s illustrate this simple regression with a plot:
```r
library(ggplot2)

# Calculate the mean flow for each catchment
someStations$meangdf <- unlist(lapply(s2, mean))

# Linear regression of mean flow against catchment area
ggplot(someStations, aes(x = as.numeric(`catchment-area`), y = meangdf)) +
  geom_point() +
  stat_smooth(method = "lm", col = "red") +
  xlab(expression(paste("Catchment area [km"^2, "]"))) +
  ylab(expression(paste("Mean flow [m"^3, "/s]")))
```